Introduction

Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.

The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).

Spec Status

Each section of the spec must be stable and audited before it is considered done. The state of each section is tracked below.

  • The State column indicates the stability as defined in the legend.
  • The Theory Audit column shows the date of the last theory audit with a link to the report.

Spec Status Legend

  • Stable: Unlikely to change in the foreseeable future.
  • Reliable: All content is correct. Important details are covered.
  • Draft/WIP: All content is correct. Details are being worked on.
  • Incorrect: Do not follow. Important things have changed.
  • Missing: No work has been done yet.

Spec Status Overview

Section State Theory Audit
1 Introduction Reliable
1.2 Architecture Diagrams Reliable
1.3 Key Concepts Reliable
1.4 Filecoin VM Reliable
1.5 System Decomposition Reliable
1.5.1 What are Systems? How do they work? Reliable
1.5.2 Implementing Systems Reliable
2 Systems Draft/WIP
2.1 Filecoin Nodes Reliable
2.1.1 Node Types Stable
2.1.2 Node Repository Stable
2.1.2.1 Key Store Reliable
2.1.2.2 IPLD Store Stable Draft/WIP
2.1.3 Network Interface Stable
2.1.4 Clock Reliable
2.2 Files & Data Reliable
2.2.1 File Reliable
2.2.1.1 FileStore - Local Storage for Files Reliable
2.2.2 The Filecoin Piece Stable
2.2.3 Data Transfer in Filecoin Stable
2.2.4 Data Formats and Serialization Reliable
2.3 Virtual Machine Reliable
2.3.1 VM Actor Interface Reliable Draft/WIP
2.3.2 State Tree Reliable Draft/WIP
2.3.3 VM Message - Actor Method Invocation Reliable Draft/WIP
2.3.4 VM Runtime Environment (Inside the VM) Reliable
2.3.5 Gas Fees Reliable Report Coming Soon
2.3.6 System Actors Reliable Reports
2.3.7 VM Interpreter - Message Invocation (Outside VM) Draft/WIP Draft/WIP
2.4 Blockchain Reliable Draft/WIP
2.4.1 Blocks Reliable
2.4.1.1 Block Reliable
2.4.1.2 Tipset Reliable
2.4.1.3 Chain Manager Reliable
2.4.1.4 Block Producer Reliable Draft/WIP
2.4.2 Message Pool Stable Draft/WIP
2.4.2.1 Message Propagation Stable
2.4.2.2 Message Storage Stable
2.4.3 ChainSync Stable
2.4.4 Storage Power Consensus Reliable Draft/WIP
2.4.4.6 Storage Power Actor Reliable Draft/WIP
2.5 Token Reliable
2.5.1 Minting Model Reliable
2.5.2 Block Reward Minting Reliable
2.5.3 Token Allocation Reliable
2.5.4 Payment Channels Stable Draft/WIP
2.5.5 Multisig Wallet & Actor Reliable Reports
2.6 Storage Mining Reliable Draft/WIP
2.6.1 Sector Stable
2.6.1.1 Sector Lifecycle Stable
2.6.1.2 Sector Quality Stable
2.6.1.3 Sector Sealing Stable Draft/WIP
2.6.1.4 Sector Faults Stable Draft/WIP
2.6.1.5 Sector Recovery Reliable Draft/WIP
2.6.1.6 Adding Storage Stable Draft/WIP
2.6.1.7 Upgrading Sectors Stable Draft/WIP
2.6.2 Storage Miner Reliable Draft/WIP
2.6.2.4 Storage Mining Cycle Reliable Draft/WIP
2.6.2.5 Storage Miner Actor Draft/WIP Reports
2.6.3 Miner Collaterals Reliable
2.6.4 Storage Proving Draft/WIP Draft/WIP
2.6.4.2 Sector Poster Draft/WIP Draft/WIP
2.6.4.3 Sector Sealer Draft/WIP Draft/WIP
2.7 Markets Stable
2.7.1 Storage Market in Filecoin Stable Draft/WIP
2.7.2 Storage Market On-Chain Components Reliable Draft/WIP
2.7.2.3 Storage Market Actor Reliable Reports
2.7.2.4 Storage Deal Flow Reliable Draft/WIP
2.7.2.5 Storage Deal States Reliable
2.7.2.6 Faults Reliable Draft/WIP
2.7.3 Retrieval Market in Filecoin Stable
2.7.3.5 Retrieval Peer Resolver Stable
2.7.3.6 Retrieval Protocols Stable
2.7.3.7 Retrieval Client Stable
2.7.3.8 Retrieval Provider (Miner) Stable
2.7.3.9 Retrieval Deal Status Stable
3 Libraries Reliable
3.1 DRAND Stable Reports
3.2 IPFS Stable Draft/WIP
3.3 Multiformats Stable
3.4 IPLD Stable
3.5 Libp2p Stable Draft/WIP
4 Algorithms Draft/WIP
4.1 Expected Consensus Reliable Draft/WIP
4.2 Proof-of-Storage Reliable Draft/WIP
4.2.2 Proof-of-Replication (PoRep) Reliable Draft/WIP
4.2.3 Proof-of-Spacetime (PoSt) Reliable Draft/WIP
4.3 Stacked DRG Proof of Replication Stable Report Coming Soon
4.3.16 SDR Notation, Constants, and Types Stable Report Coming Soon
4.4 BlockSync Stable
4.5 GossipSub Stable Reports
4.6 Cryptographic Primitives Draft/WIP
4.6.1 Signatures Draft/WIP Report Coming Soon
4.6.2 Verifiable Random Function Incorrect
4.6.3 Randomness Reliable Draft/WIP
4.6.4 Poseidon Incorrect Missing
4.7 Verified Clients Draft/WIP Draft/WIP
4.8 Filecoin CryptoEconomics Reliable Draft/WIP
5 Glossary Reliable
6 Appendix Draft/WIP
6.1 Filecoin Address Reliable
6.2 Data Structures Reliable
6.3 Filecoin Parameters Draft/WIP
6.4 Audit Reports Reliable
7 Filecoin Implementations Reliable
7.1 Lotus Reliable
7.2 Venus Reliable
7.3 Forest Reliable
7.4 Fuhon (cpp-filecoin) Reliable
8 Releases

Spec Stabilization Progress

This progress bar shows what percentage of the spec sections are considered stable.

WIP 11% Reliable 55% Stable 32%

Implementations Status

Known implementations of the Filecoin spec are tracked below, with their current CI build status, their test coverage as reported by codecov.io, and a link to their last security audit report where one exists.

Repo Language CI Test Coverage Security Audit
lotus go Failed 40% Reports
go-fil-markets go Passed 58% Reports
specs-actors go Unknown 69% Reports
rust Unknown Unknown Reports
venus go Unknown 24% Missing
forest rust Passed 55% Missing
cpp-filecoin c++ Passed 45% Missing

Architecture Diagrams

Actor State Diagram


Key Concepts

For clarity, we use the following types of entities to describe implementations of the Filecoin protocol:

  • Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).

  • Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).

  • Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, the ChainSync is a component: it could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.

  • APIs are the interfaces for delivering messages to components. A client’s view of a given sub-protocol, such as a request to a miner node’s Storage Provider component to store files in the storage market, may require the execution of a series of API requests.

  • Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components and supports all of the APIs detailed in the spec.

  • Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.

  • Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.

Filecoin VM

The majority of Filecoin's user-facing functionality (payments, the storage market, the power table, etc.) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks and agrees on which 'chain' of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.

The global state here consists of a set of actors, each with their own private state.

An actor is the Filecoin equivalent of Ethereum's smart contracts: it is essentially an 'object' in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor.
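
As an illustration only (field names here are not normative; the authoritative definition lives in the VM Actor Interface section), the per-actor record described above could be sketched in Go as:

package vm

import (
	"math/big"

	"github.com/ipfs/go-cid"
)

// Actor is a rough sketch of the on-chain record kept for every actor;
// field names are illustrative, not normative.
type Actor struct {
	Code    cid.Cid  // CID identifying the actor's type (its code)
	Head    cid.Cid  // pointer (CID) to the actor's current state
	Nonce   uint64   // number of messages sent by this actor
	Balance *big.Int // FIL balance attributed to the actor, in attoFIL
}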

There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (i.e., a normal user with Filecoin) you must send a signed message to the network, and pay a fee to the miner that includes your message in a block. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message's execution. The fee is the equivalent of transaction fees in Bitcoin and Ethereum: it is proportional to the work that is done to process the message (Bitcoin prices messages per byte; Ethereum uses the concept of 'gas', and Filecoin uses 'gas' as well).
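
A sketch of what such a signed message carries (illustrative, based on common Filecoin implementations; see the VM Message section for the normative structure):

package vm

import "math/big"

// Address is a placeholder for the Filecoin address type.
type Address string

// Message is an illustrative sketch of a message that invokes a method on
// an actor and pays gas for its execution.
type Message struct {
	To    Address  // actor receiving the call
	From  Address  // account actor that signs and pays for the message
	Nonce uint64   // must match the sender's current nonce
	Value *big.Int // amount of FIL transferred with the call

	Method uint64 // method number to invoke on the receiving actor
	Params []byte // serialized (DAG-CBOR) parameters for the method

	GasLimit   int64    // maximum units of gas the message may consume
	GasFeeCap  *big.Int // maximum price per unit of gas the sender will pay
	GasPremium *big.Int // portion of the gas fee that goes to the miner
}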

Second, an actor may call a method on another actor during the invocation of one of its methods. However, this may only happen as a result of some actor being invoked by an external user's message (note: an actor called by a user may call another actor that then calls another actor, as many layers deep as the execution can afford to run).

For full implementation details, see the VM Subsystem.

System Decomposition

What are Systems? How do they work?

Filecoin decouples and modularizes functionality into loosely-joined systems. Each system adds significant functionality, usually to achieve a set of important and tightly related goals.

For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separate from Files, Pieces, Piece Preparation, and Data Transfer. Both of these are in turn separate from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.

Why is System decoupling useful?

This decoupling is useful for:

  • Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security-critical systems (e.g., Blockchain), but do not need many implementations of Systems that can be decoupled.
  • Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
  • Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
  • Scalability: systems, and various use cases, may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.

Filecoin Nodes don’t need all the systems

Filecoin Nodes vary significantly and do not need all the systems. Most systems are only needed for a subset of use cases.

For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust.

Note: Filecoin does not use the "full node" or "light client" terminology that is in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore, in terms of the Systems they run. For example:

  • Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
  • Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
  • Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
  • Storage Miner Node: Runs the Blockchain, Storage Market, and Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.

Separating Systems

How do we determine what functionality belongs in one system vs another?

Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries naturally spring from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other is largely unrelated to VM Execution.

Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single actor. This coupled a large amount of functionality: StorageDeal making, the StorageMarket and markets in general, Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking the one actor into two.

Decomposing within a System

Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.

Implementing Systems

System Requirements

In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be provided by implementations in a variety of ways; the guidance here should be treated as a recommendation (SHOULD).

All Systems, as defined in this document, require the following:

  • Repository:
    • Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
    • User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
    • Local, Secure KeyStore. A facility for generating and using cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, and should instead do so through an abstraction (i.e., the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
  • Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
  • Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
  • Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.

For this purpose, we use the FilecoinNode data structure, which is passed into all systems at initialization.
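
As a minimal sketch (illustrative only; it assumes the abstractions listed above are defined as Go interfaces elsewhere), the FilecoinNode dependency bundle could look like this:

package node

import (
	"github.com/libp2p/go-libp2p/core/host"
)

// FilecoinNode is a rough sketch of the dependency bundle passed to every
// system at initialization. IpldStore, Config, KeyStore, FileStore and
// Clock are placeholders for the abstractions described above.
type FilecoinNode struct {
	Repo struct {
		Ipld   IpldStore // persistent store for small IPLD data structures
		Config Config    // user-editable configuration values
		Keys   KeyStore  // secure key generation, signing and verification
	}
	Files   FileStore // persistent store for large files
	Network host.Host // libp2p host on which systems mount their protocols
	Clock   Clock     // source of network time
}

// Placeholder types standing in for the abstractions above.
type (
	IpldStore interface{}
	Config    interface{}
	KeyStore  interface{}
	FileStore interface{}
	Clock     interface{}
)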

System Limitations

Further, Systems MUST abide by the following limitations:

  • Random crashes. A Filecoin Node may crash at any moment. Systems must be secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through Ipld data structures, and through the use of initialization routines that check state, and perhaps correct errors.
  • Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it.) This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
  • No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
  • No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (eg Bluetooth, LANs, etc).

Systems

In this section we detail the system components one by one, in increasing order of complexity and/or interdependence with other system components. Interactions between components are only briefly discussed where appropriate; the overall workflow is given in the Introduction section. In particular, in this section we discuss:

  • Filecoin Nodes: the different types of nodes that participate in the Filecoin Network, as well as important parts and processes that these nodes run, such as the key store and IPLD store, as well as the network interface to libp2p.
  • Files & Data: the data units of Filecoin, such as the Sectors and the Pieces.
  • Virtual Machine: the subcomponents of the Filecoin VM, such as the actors, i.e., the smart contracts that run on the Filecoin Blockchain, and the State Tree.
  • Blockchain: the main building blocks of the Filecoin blockchain, such as the structure of messages and blocks, the message pool, as well as how nodes synchronise the blockchain when they first join the network.
  • Token: the components needed for a wallet.
  • Storage Mining: the details of storage mining, storage power consensus, and how storage miners prove storage (without going into details of proofs, which are discussed later).
  • Markets: the storage and retrieval markets, which are primarily processes that take place off-chain, but are very important for the smooth operation of the decentralised storage market.

Filecoin Nodes

This section starts by discussing the concept of Filecoin Nodes. Although different node types in the Lotus implementation of Filecoin are less strictly defined than in other blockchain networks, there are different properties and features that different types of nodes should implement. In short, nodes are defined based on the set of services they provide.

In this section we also discuss issues related to storage of system files in Filecoin nodes. Note that by storage in this section we do not refer to the storage that a node commits for mining in the network, but rather the local storage repositories that it needs to have available for keys and IPLD data among other things.

We also discuss the network interface: how nodes find and connect with each other, how they interact and propagate messages using libp2p, and how the node's clock is set.

Node Types

Nodes in the Filecoin network are primarily identified in terms of the services they provide. The type of node, therefore, depends on which services a node provides. A basic set of services in the Filecoin network include:

  • chain verification
  • storage market client
  • storage market provider
  • retrieval market client
  • retrieval market provider
  • storage mining

Any node participating in the Filecoin network should provide the chain verification service as a minimum. Depending on which extra services a node provides on top of chain verification, it gets the corresponding functionality and Node Type “label”.

Nodes are realized as a repository (directory) on the host in a one-to-one relationship - that is, one repo belongs to a single node. That said, one host can run multiple Filecoin nodes by maintaining a corresponding repository for each.

A Filecoin implementation can support the following subsystems, or types of nodes:

  • Chain Verifier Node: this is the minimum functionality that a node needs to have in order to participate in the Filecoin network. This type of node cannot play an active role in the network, unless it implements Client Node functionality, described below. A Chain Verifier Node must synchronise the chain (ChainSync) when it first joins the network to reach current consensus. From then on, the node must constantly be fetching any addition to the chain (i.e., receive the latest blocks) and validate them to reach consensus state.
  • Client Node: this type of node builds on top of the Chain Verifier Node and must be implemented by any application that is building on the Filecoin network. This can be thought of as the main infrastructure node (at least as far as interaction with the blockchain is concerned) of applications such as exchanges or decentralised storage applications building on Filecoin. The node should implement the storage market and retrieval market client services. The client node should interact with the Storage and Retrieval Markets and be able to do Data Transfers through the Data Transfer Module.
  • Retrieval Miner Node: this node type extends the Chain Verifier Node with retrieval miner functionality, that is, participation in the retrieval market. As such, this node type needs to implement the retrieval market provider service and be able to do Data Transfers through the Data Transfer Module.
  • Storage Miner Node: this type of node must implement all of the required functionality for validating, creating and adding blocks to extend the blockchain. It should implement the chain verification, storage mining and storage market provider services and be able to do Data Transfers through the Data Transfer Module.

Node Interface

The Lotus implementation of the Node Interface can be found here.

Chain Verifier Node

type ChainVerifierNode interface {
  FilecoinNode

  systems.Blockchain
}

The Lotus implementation of the Chain Verifier Node can be found here.

Client Node

type ClientNode struct {
  FilecoinNode

  systems.Blockchain
  markets.StorageMarketClient
  markets.RetrievalMarketClient
  markets.DataTransfers
}

The Lotus implementation of the Client Node can be found here.

Storage Miner Node

type StorageMinerNode interface {
  FilecoinNode

  systems.Blockchain
  systems.Mining
  markets.StorageMarketProvider
  markets.DataTransfers
}

The Lotus implementation of the Storage Miner Node can be found here.

Retrieval Miner Node

type RetrievalMinerNode interface {
  FilecoinNode

  blockchain.Blockchain
  markets.RetrievalMarketProvider
  markets.DataTransfers
}

Relayer Node

type RelayerNode interface {
  FilecoinNode

  blockchain.MessagePool
}

Node Configuration

The Lotus implementation of Filecoin Node configuration values can be found here.

Node Repository

The Filecoin node repository is simply local storage for system and chain data. It is an abstraction of the data which any functional Filecoin node needs to store locally in order to run correctly.

The repository is accessible to the node’s systems and subsystems and can be compartmentalized from the node’s FileStore.

The repository stores the node’s keys, the IPLD data structures of stateful objects as well as the node configuration settings.

The Lotus implementation of the FileStore Repository can be found here.

Key Store

The Key Store is a fundamental abstraction in any full Filecoin node. It is used to store the keypairs associated with a given miner's address (see the actual definition further down) and its distinct workers (should the miner choose to run multiple workers).

Node security depends in large part on keeping these keys secure. To that end we strongly recommend: 1) keeping keys separate from all subsystems, 2) using a separate key store to sign requests as required by other subsystems, and 3) keeping those keys that are not used as part of mining in cold storage.
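
A minimal sketch of such a key store abstraction (method names follow the operations listed under System Requirements; this is illustrative, not the normative interface):

package keystore

// KeyStore is an illustrative sketch of the key store abstraction described
// above: systems never touch raw private keys, they only request operations.
type KeyStore interface {
	// Generate creates a new keypair of the given type (e.g. "bls",
	// "secp256k1") and returns an identifier for it.
	Generate(keyType string) (KeyID, error)

	// Sign signs a message with the identified private key.
	Sign(id KeyID, message []byte) (signature []byte, err error)

	// SigVerify verifies a signature against the identified public key.
	SigVerify(id KeyID, message []byte, signature []byte) (bool, error)

	// PublicKey returns the public key bytes for the identified keypair.
	PublicKey(id KeyID) ([]byte, error)
}

// KeyID identifies a keypair held by the KeyStore (e.g. an address).
type KeyID string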

Filecoin storage miners rely on three main components:

  • The storage miner actor address is uniquely assigned to a given storage miner actor upon calling registerMiner() in the Storage Power Consensus Subsystem. In effect, the storage miner does not have an address itself, but is rather identified by the address of the actor it is tied to. This is a unique identifier for a given storage miner to which its power and other keys will be associated. The actor value specifies the address of an already created miner actor.
  • The owner keypair is provided by the miner ahead of registration and its public key associated with the miner address. The owner keypair can be used to administer a miner and withdraw funds.
  • The worker keypair is the keypair whose public key is associated with the storage miner actor address. It can be chosen and changed by the miner. The worker keypair is used to sign blocks and may also be used to sign other messages. It must be a BLS keypair, given its use as part of the Verifiable Random Function.

Multiple storage miner actors can share one owner public key or likewise a worker public key.

The process for changing the worker keypairs on-chain (i.e. the worker Key associated with a storage miner actor) is specified in Storage Miner Actor. Note that this is a two-step process. First, a miner stages a change by sending a message to the chain. Then, the miner confirms the key change after the randomness lookback time. Finally, the miner will begin signing blocks with the new key after an additional randomness lookback time. This delay exists to prevent adaptive key selection attacks.

Key security is of utmost importance in Filecoin, as is also the case with keys in every blockchain. Failure to securely store and use keys or exposure of private keys to adversaries can result in the adversary having access to the miner’s funds.

IPLD Store

InterPlanetary Linked Data (IPLD) is a set of libraries which allow for the interoperability of content-addressed data structures across different distributed systems and protocols. It provides a fundamental ‘common language’ to primitive cryptographic hashing, enabling data structures to be verifiably referenced and retrieved between two independent protocols. For example, a user can reference an IPFS directory in an Ethereum transaction or smart contract.

The IPLD Store of a Filecoin Node is local storage for hash-linked data.

IPLD is fundamentally comprised of three layers:

  • the Block Layer, which focuses on block formats and addressing, how blocks can advertise or self-describe their codec
  • the Data Model Layer, which defines a set of required types that need to be included in any implementation - discussed in more detail below.
  • the Schema Layer, which allows for extension of the Data Model to interact with more complex structures without the need for custom translation abstractions.

Further details about IPLD can be found in its specification.

The Data Model

At its core, IPLD defines a Data Model for representing data. The Data Model is designed for practical implementation across a wide variety of programming languages, while maintaining usability for content-addressed data and a broad range of generalized tools that interact with that data.

The Data Model includes a range of standard primitive types (or “kinds”), such as booleans, integers, strings, nulls and byte arrays, as well as two recursive types: lists and maps. Because IPLD is designed for content-addressed data, it also includes a “link” primitive in its Data Model. In practice, links use the CID specification. IPLD data is organized into “blocks”, where a block is represented by the raw, encoded data and its content-address, or CID. Every content-addressable chunk of data can be represented as a block, and together, blocks can form a coherent graph, or Merkle DAG.

Applications interact with IPLD via the Data Model, and IPLD handles marshalling and unmarshalling via a suite of codecs. IPLD codecs may support the complete Data Model or part of the Data Model. Two codecs that support the complete Data Model are DAG-CBOR and DAG-JSON. These codecs are respectively based on the CBOR and JSON serialization formats, but include formalizations that allow them to encapsulate the IPLD Data Model (including its link type) and additional rules that create a strict mapping between any set of data and its respective content address (or hash digest). These rules include mandating a particular ordering of keys when encoding maps, and the sizing of integer types when stored.

IPLD in Filecoin

IPLD is used in two ways in the Filecoin network:

  • All system datastructures are stored using DAG-CBOR (an IPLD codec). DAG-CBOR is a stricter subset of CBOR with a predefined tagging scheme, designed for the storage, retrieval and traversal of hash-linked data DAGs. Compared to CBOR, DAG-CBOR can guarantee determinism.
  • Files and data stored on the Filecoin network are also stored using various IPLD codecs (not necessarily DAG-CBOR).

IPLD provides a consistent and coherent abstraction above data that allows Filecoin to build and interact with complex, multi-block data structures, such as HAMT and AMT. Filecoin uses the DAG-CBOR codec for the serialization and deserialization of its data structures and interacts with that data using the IPLD Data Model, upon which various tools are built. IPLD Selectors can also be used to address specific nodes within a linked data structure.
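
For illustration (assuming the data has already been encoded with a DAG-CBOR codec, e.g. by a tool such as cbor-gen), deriving a CID for a block of encoded bytes might look like this, using the go-cid and go-multihash libraries:

package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// encoded is assumed to be a valid DAG-CBOR encoding of some data
	// structure; here it is the DAG-CBOR encoding of the map {"a": 1}.
	encoded := []byte{0xa1, 0x61, 0x61, 0x01}

	// Hash the encoded block. Filecoin data structures commonly use
	// blake2b-256, but sha2-256 is shown here for simplicity.
	digest, err := mh.Sum(encoded, mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// A CIDv1 combines the codec (DAG-CBOR) with the multihash digest,
	// giving the content address under which the block is stored.
	c := cid.NewCidV1(cid.DagCBOR, digest)
	fmt.Println(c) // prints the block's CID (content address)
}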

IpldStores

The Filecoin network relies primarily on two distinct IPLD GraphStores:

  • One ChainStore which stores the blockchain, including block headers, associated messages, etc.
  • One StateStore which stores the payload state from a given blockchain, or the stateTree resulting from all block messages in a given chain being applied to the genesis state by the Filecoin VM.

The ChainStore is downloaded by a node from its peers during the bootstrapping phase of Chain Sync and is stored by the node thereafter. It is updated on every new block reception, or if the node syncs to a new best chain.

The StateStore is computed through the execution of all block messages in a given ChainStore and is stored by the node thereafter. It is updated with every new incoming block’s processing by the VM Interpreter, and referenced accordingly by new blocks produced atop it in the block header’s ParentState field.

Network Interface

Filecoin nodes use several protocols of the libp2p networking stack for peer discovery, peer routing and block and message propagation. Libp2p is a modular networking stack for peer-to-peer networks. It includes several protocols and mechanisms to enable efficient, secure and resilient peer-to-peer communication. Libp2p nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols that each of them supports and all Filecoin related protocols will be mounted under /fil/... protocol identifiers.

The complete specification of libp2p can be found at https://github.com/libp2p/specs. Here is the list of libp2p protocols used by Filecoin.

  • Graphsync: Graphsync is a protocol to synchronize graphs across peers. It is used to reference, address, request and transfer blockchain and user data between Filecoin nodes. The draft specification of GraphSync provides more details on the concepts, the interfaces and the network messages used by GraphSync. There are no Filecoin-specific modifications to the protocol id.

  • Gossipsub: Block headers and messages are propagated through the Filecoin network using a gossip-based pubsub protocol called GossipSub. As is traditionally the case with pubsub protocols, nodes subscribe to topics and receive messages published on those topics. When nodes receive messages on a topic they are subscribed to, they run a validation process and then: i) pass the message to the application; ii) forward the message to other nodes they know are subscribed to the same topic. Furthermore, version 1.1 of GossipSub, which is the one used in Filecoin, is enhanced with security mechanisms that make the protocol resilient against attacks. The GossipSub Specification provides all the protocol details pertaining to its design and implementation, as well as specific settings for the protocol's parameters. There have been no Filecoin-specific modifications to the protocol id. However, the topic identifiers MUST be of the form fil/blocks/<network-name> and fil/msgs/<network-name> (a subscription sketch follows this list).

  • Kademlia DHT: The Kademlia DHT is a distributed hash table with a logarithmic bound on the maximum number of lookups for a particular node. In the Filecoin network, the Kademlia DHT is used primarily for peer discovery and peer routing. In particular, when a node wants to store data in the Filecoin network, they get a list of miners and their node information. This node information includes (among other things) the PeerID of the miner. In order to connect to the miner and exchange data, the node that wants to store data in the network has to find the Multiaddress of the miner, which they do by querying the DHT. The libp2p Kad DHT Specification provides implementation details of the DHT structure. For the Filecoin network, the protocol id must be of the form fil/<network-name>/kad/1.0.0.

  • Bootstrap List: This is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses are defined by the users (i.e., applications).

  • Peer Exchange: This protocol is the realisation of the peer discovery process discussed above under Kademlia DHT. It enables peers to find information and addresses of other peers in the network by interfacing with the DHT to create and issue queries for the peers they want to connect to.
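
As referenced in the Gossipsub item above, a minimal sketch of subscribing to the Filecoin block topic with go-libp2p and go-libp2p-pubsub might look like the following (the network name, topic string and error handling are illustrative only):

package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Create a libp2p host; a real node would also configure transports,
	// peer identity and bootstrap connections.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}

	// Mount GossipSub on the host.
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		panic(err)
	}

	// Topic ids follow the form fil/blocks/<network-name>; "mainnet" is
	// used here purely for illustration.
	topic, err := ps.Join("fil/blocks/mainnet")
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Receive incoming block announcements (validation omitted here).
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Printf("received %d bytes from %s\n", len(msg.Data), msg.GetFrom())
	}
}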

Clock

Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock (tolerating some bounded offset).

Filecoin relies on this system clock in order to secure consensus. Specifically, the clock is necessary to support validation rules that prevent block producers from mining blocks with a future timestamp and running leader elections more frequently than the protocol allows.

Clock uses

The Filecoin system clock is used:

  • by syncing nodes to validate that incoming blocks were mined in the appropriate epoch given their timestamp (see Block Validation). This is possible because the system clock maps all times to a unique epoch number totally determined by the start time in the genesis block.
  • by syncing nodes to drop blocks coming from a future epoch
  • by mining nodes to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in the current round (see Storage Power Consensus).

In order to allow miners to do the above, the system clock must:

  1. Have low enough offset relative to other nodes so that blocks are not mined in epochs considered future epochs from the perspective of other nodes (those blocks should not be validated until the proper epoch/time as per validation rules).
  2. Set epoch number on node initialization equal to epoch = Floor[(current_time - genesis_time) / epoch_time]

It is expected that other subsystems will register to a NewRound() event from the clock subsystem.
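
A sketch of the epoch computation in point 2 above (the 30-second epoch duration matches Filecoin mainnet, but treat it and the genesis time as illustrative parameters read from network configuration):

package clock

import "time"

// EpochDuration is the length of a Filecoin epoch; 30 seconds on mainnet
// (illustrative here - implementations read this from network parameters).
const EpochDuration = 30 * time.Second

// CurrentEpoch implements epoch = Floor[(current_time - genesis_time) / epoch_time].
func CurrentEpoch(now, genesis time.Time) int64 {
	if now.Before(genesis) {
		return 0
	}
	return int64(now.Sub(genesis) / EpochDuration)
}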

Clock Requirements

Clocks used as part of the Filecoin protocol should be kept in sync, with offset less than 1 second so as to enable appropriate validation.

Computer-grade crystals can be expected to deviate by 1ppm (i.e. 1 microsecond every second, or 0.6 seconds per week). Therefore, in order to respect the requirement above:

  • Nodes SHOULD run an NTP daemon (e.g. timesyncd, ntpd, chronyd) to keep their clocks synchronized to one or more reliable external references.
  • Larger mining operations MAY consider using local NTP/PTP servers with GPS references and/or frequency-stable external clocks for improved timekeeping.

Mining operations have a strong incentive to prevent their clock skewing ahead more than one epoch to keep their block submissions from being rejected. Likewise they have an incentive to prevent their clocks skewing behind more than one epoch to avoid partitioning themselves off from the synchronized nodes in the network.

Files & Data

Filecoin's primary aim is to store clients' Files and Data. This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.

File

// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string

// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
    Path()   Path
    Size()   int
    Close()  error

    // Read reads from File into buf, starting at offset, and for size bytes.
    Read(offset int, size int, buf Bytes) struct {size int, e error}

    // Write writes from buf into File, starting at offset, and for size bytes.
    Write(offset int, size int, buf Bytes) struct {size int, e error}
}

FileStore - Local Storage for Files

The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.

// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
    Open(p Path)           union {f File, e error}
    Create(p Path)         union {f File, e error}
    Store(p Path, f File)  error
    Delete(p Path)         error

    // maybe add:
    // Copy(SrcPath, DstPath)
}
Varying user needs

Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes implementations easy to swap, and makes it easy for end-users to swap in the storage system of their choice.

Implementation examples

The FileStore interface may be implemented by many kinds of backing data storage systems. For example:

  • The host Operating System file system
  • Any Unix/Posix file system
  • RAID-backed file systems
  • Networked or distributed file systems (NFS, HDFS, etc.)
  • IPFS
  • Databases
  • NAS systems
  • Raw serial or block devices
  • Raw hard drives (hdd sectors, etc)

Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.
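
As a sketch of the simplest case - backing the FileStore with the host OS file system - an implementation might wrap the standard library like this (illustrative; the FileStore/File definitions above are pseudocode, so the Go shapes here are approximations):

package filestore

import (
	"os"
	"path/filepath"
)

// OSFileStore is an illustrative FileStore backed by the host OS file
// system, rooted at a base directory.
type OSFileStore struct {
	Root string // base directory under which all Paths are resolved
}

// resolve maps a FileStore Path to a host filesystem path under Root.
func (s OSFileStore) resolve(p string) string {
	return filepath.Join(s.Root, filepath.Clean("/"+p))
}

// Open opens an existing file for reading and writing.
func (s OSFileStore) Open(p string) (*os.File, error) {
	return os.OpenFile(s.resolve(p), os.O_RDWR, 0)
}

// Create creates (or truncates) a file at the given path.
func (s OSFileStore) Create(p string) (*os.File, error) {
	if err := os.MkdirAll(filepath.Dir(s.resolve(p)), 0o755); err != nil {
		return nil, err
	}
	return os.Create(s.resolve(p))
}

// Delete removes the file at the given path.
func (s OSFileStore) Delete(p string) error {
	return os.Remove(s.resolve(p))
}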

The Filecoin Piece

The Filecoin Piece is the main unit of negotiation for data that users store on the Filecoin network. The Filecoin Piece is not a unit of storage: it is not of a specific size, but is upper-bounded by the size of the Sector. A Filecoin Piece can be of any size, but if a Piece is larger than the size of a Sector that the miner supports, it has to be split into more Pieces so that each Piece fits into a Sector.

A Piece is an object that represents a whole or part of a File, and is used by Storage Clients and Storage Miners in Deals. Storage Clients hire Storage Miners to store Pieces.

The Piece data structure is designed for proving storage of arbitrary IPLD graphs and client data. This diagram shows the detailed composition of a Piece and its proving tree, including both full and bandwidth-optimized Piece data structures.

Pieces, Proving Trees, and Piece Data Structures

Data Representation

It is important to highlight that data submitted to the Filecoin network goes through several transformations before it reaches the format in which the StorageProvider stores it.

Below is the process followed from the point a user starts preparing a file to store in Filecoin to the point that the provider produces all the identifiers of Pieces stored in a Sector.

The first three steps take place on the client side.

  1. When a client wants to store a file in the Filecoin network, they start by producing the IPLD DAG of the file. The hash that represents the root node of the DAG is an IPFS-style CID, called Payload CID.

  2. In order to make a Filecoin Piece, the IPLD DAG is serialised into a "Content-Addressable aRchive" (.car) file, which is in raw bytes format. A CAR file is an opaque blob of data that packs together and transfers IPLD nodes. The Payload CID is common between the CAR'ed and un-CAR'ed constructions. This helps later during data retrieval, when data is transferred between the storage client and the storage provider, as we discuss below.

  3. The resulting .car file is padded with extra zero bits in order for the file to make a binary Merkle tree. To achieve a clean binary Merkle tree, the padded .car file size has to be a power of two. A padding process called Fr32 padding, which inserts two (2) zero bits after every 254 bits of input (so that every 256-bit word carries 254 payload bits), is applied to the input file. The padding process then takes the output of the Fr32 padding and finds the next power-of-two size above it. The gap between the result of the Fr32 padding and that next power-of-two size is padded with zeros (a sizing sketch follows this list).
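
A rough sketch of the sizing arithmetic described in step 3 (illustrative only; real implementations compute this on byte-aligned quanta, but the resulting power-of-two padded size is the same):

package piece

// PaddedPieceSize sketches the sizing described in step 3 above: Fr32
// padding expands every 254 bits of input to 256 bits, and the result is
// then zero-padded up to the next power of two. Sizes are in bytes.
func PaddedPieceSize(carSize uint64) uint64 {
	// Fr32 expansion: multiply by 256/254 (= 128/127), rounding up.
	fr32Size := (carSize*128 + 126) / 127

	// Zero-pad up to the next power of two.
	padded := uint64(1)
	for padded < fr32Size {
		padded <<= 1
	}
	return padded
}

// Example: a CAR file of 1,000,000 bytes expands to roughly 1,007,875 bytes
// after Fr32 padding and is then zero-padded to 1,048,576 bytes (a 1 MiB piece).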

In order to justify the reasoning behind these steps, it is important to understand the overall negotiation process between the StorageClient and a StorageProvider. The piece CID or CommP is what is included in the deal that the client negotiates and agrees with the storage provider. When the deal is agreed, the client sends the file to the provider (using GraphSync). The provider has to construct the CAR file out of the file received and derive the Piece CID on their side. In order to prevent the client from sending a different file from the one agreed upon, the Piece CID that the provider generates has to be the same as the one included in the deal negotiated earlier.

The following steps take place on the StorageProvider side (apart from step 1, which can also take place on the client side).

  1. Once the StorageProvider receives the file from the client, they calculate the Merkle root out of the hashes of the Piece (padded .car file). The resulting root of the clean binary Merkle tree is the Piece CID. This is also referred to as CommP or Piece Commitment and, as mentioned earlier, has to be the same as the one included in the deal.

  2. The Piece is included in a Sector together with data from other deals. The StorageProvider then calculates the Merkle root over all the Pieces inside the Sector. The root of this tree is CommD (aka Commitment of Data or UnsealedSectorCID).

  3. The StorageProvider then seals the Sector, and the root of the resulting Merkle tree is CommRLast.

  4. Proof of Replication (PoRep), SDR in particular, generates another Merkle root hash called CommC, as an attestation that replication of the data whose commitment is CommD has been performed correctly.

  5. Finally, CommR (or Commitment of Replication) is the hash of CommC || CommRLast.

IMPORTANT NOTES:

  • Fr32 is a 32-byte representation of a field element (which, in our case, is an element of the arithmetic field of BLS12-381). To be well-formed, a value of type Fr32 must actually fit within that field, but this is not enforced by the type system. It is an invariant which must be preserved by correct usage. In the case of so-called Fr32 padding, two zero bits are inserted 'after' a number requiring at most 254 bits to represent. This guarantees that the result will be Fr32, regardless of the value of the initial 254 bits. This is a 'conservative' technique, since for some initial values, only one bit of zero-padding would actually be required.
  • Steps 2 and 3 above are specific to the Lotus implementation. The same outcome can be achieved in different ways, e.g., without using Fr32 bit-padding. However, any implementation has to make sure that the initial IPLD DAG is serialised and padded so that it gives a clean binary tree, and therefore, calculating the Merkle root out of the resulting blob of data gives the same Piece CID. As long as this is the case, implementations can deviate from the first three steps above.
  • Finally, it is important to add a note related to the Payload CID (discussed in the first two steps above) and the data retrieval process. The retrieval deal is negotiated on the basis of the Payload CID. When the retrieval deal is agreed, the retrieval miner starts sending the unsealed and “un-CAR’ed” file to the client. The transfer starts from the root node of the IPLD Merkle Tree and in this way the client can validate the Payload CID from the beginning of the transfer and verify that the file they are receiving is the file they negotiated in the deal and not random bits.

PieceStore

The PieceStore module allows for storage and retrieval of Pieces from some local storage. The piecestore's main goal is to help the storage and retrieval market modules find where sealed data lives inside of sectors. The storage market writes the data, and the retrieval market reads it in order to send it out to retrieval clients.

The implementation of the PieceStore module can be found here.
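
As an illustrative sketch (not the actual go-fil-markets interface), a piece store essentially maps a piece CID to the sector locations holding that piece:

package piecestore

import "github.com/ipfs/go-cid"

// PieceBlockLocation records where (part of) a piece lives inside a sector.
// Field names are illustrative.
type PieceBlockLocation struct {
	SectorID uint64 // sector containing the piece data
	Offset   uint64 // byte offset of the piece within the unsealed sector
	Length   uint64 // length of the piece data in bytes
}

// PieceStore sketches the lookup service described above: the storage
// market records locations as it writes pieces into sectors, and the
// retrieval market queries them when serving retrieval deals.
type PieceStore interface {
	AddPieceLocation(pieceCID cid.Cid, loc PieceBlockLocation) error
	GetPieceLocations(pieceCID cid.Cid) ([]PieceBlockLocation, error)
}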

Data Transfer in Filecoin

The Data Transfer Protocol is a protocol for transferring all or part of a Piece across the network when a deal is made. The overall goal for the data transfer module is for it to be an abstraction of the underlying transport medium over which data is transferred between different parties in the Filecoin network. Currently, the underlying medium or protocol used to actually do the data transfer is GraphSync. As such, the Data Transfer Protocol can be thought of as a negotiation protocol.

The Data Transfer Protocol is used both for Storage and for Retrieval Deals. In both cases, the data transfer request is initiated by the client. The primary reason for this is that clients will more often than not be behind NATs and, therefore, it is more convenient to initiate any data transfer from their side. In the case of Storage Deals, the data transfer request is initiated as a push request to send data to the storage provider. In the case of Retrieval Deals, the data transfer request is initiated as a pull request to retrieve data from the storage provider.

The request to initiate a data transfer includes a voucher or token (not to be confused with the Payment Channel voucher) that points to a specific deal that the two parties have agreed to before. This is so that the storage provider can identify and link the request to a deal it has agreed to and not disregard the request. As described below, the case might be slightly different for retrieval deals, where both a deal proposal and a data transfer request can be sent at once.

Modules

This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.

Data Transfer

Terminology

  • Push Request: A request to send data to the other party - normally initiated by the client and primarily in case of a Storage Deal.
  • Pull Request: A request to have the other party send data - normally initiated by the client and primarily in case of a Retrieval Deal.
  • Requestor: The party that initiates the data transfer request (whether Push or Pull) - normally the client, at least as currently implemented in Filecoin, to overcome NAT-traversal problems.
  • Responder: The party that receives the data transfer request - normally the storage provider.
  • Data Transfer Voucher or Token: A wrapper around storage- or retrieval-related data that can identify and validate the transfer request to the other party.
  • Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request or disregard the request.
  • Transporter: Once a request is negotiated and validated, the actual transfer is managed by a transporter on both sides. The transporter is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
  • Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
  • GraphSync: The default underlying transport protocol used by the Transporter. The full graphsync specification can be found here

Request Phases

There are two basic phases to any data transfer:

  1. Negotiation: the requestor and responder agree to the transfer by validating it with the data transfer voucher.
  2. Transfer: once the negotiation phase is complete, the data is actually transferred. The default protocol used to do the transfer is Graphsync.

Note that the Negotiation and Transfer stages can occur in separate round trips, or potentially in the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data. Whether the process takes place in a single or in multiple round trips depends in part on whether the request is a push request (storage deal) or a pull request (retrieval deal), and on whether the data transfer negotiation process is able to piggyback on the underlying transport mechanism. In the case of GraphSync as the transport mechanism, data transfer requests can piggyback as an extension to the GraphSync protocol using GraphSync's built-in extensibility. So, only a single round trip is required for Pull Requests. However, because GraphSync is a request/response protocol with no direct support for push-type requests, in the Push case negotiation happens in a separate request over data transfer's own libp2p protocol /fil/datatransfer/1.0.0. Other future transport mechanisms might handle both Push and Pull, either, or neither as a single round trip. Upon receiving a data transfer request, the data transfer module decodes the voucher and delivers it to the request validators. In storage deals, the request validator checks whether the included deal is one that the recipient has agreed to before. For retrieval deals, the request includes the proposal for the retrieval deal itself. As long as the request validator accepts the deal proposal, everything is done at once as a single round trip.

It is worth noting that in the case of retrieval, the provider can accept the deal and the data transfer request, but then pause the retrieval itself in order to carry out the unsealing process. The storage provider has to unseal all of the requested data before initiating the actual data transfer. Furthermore, the storage provider has the option of pausing the retrieval flow before starting the unsealing process in order to issue an unsealing payment request. Storage providers have the option to request this payment in order to cover unsealing computation costs and avoid falling victim to misbehaving clients.
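
A minimal sketch of the request validator hook described above (illustrative; the real interface in go-data-transfer differs in detail, and part of the actual package is shown in the Data Structures section below):

package datatransfer

import (
	"github.com/ipfs/go-cid"
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/libp2p/go-libp2p/core/peer"
)

// RequestValidator is a sketch of the hook a market module plugs into the
// data transfer module: it inspects the voucher attached to an incoming
// request and decides whether the transfer corresponds to a known deal.
type RequestValidator interface {
	// ValidatePush validates a push request (storage deal flow).
	ValidatePush(sender peer.ID, voucher datamodel.Node, baseCid cid.Cid) error

	// ValidatePull validates a pull request (retrieval deal flow).
	ValidatePull(receiver peer.ID, voucher datamodel.Node, baseCid cid.Cid) error
}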

Example Flows

Push Flow

Data Transfer - Push Flow

  1. A requestor initiates a Push transfer when it wants to send data to another party.
  2. The requestor's data transfer module will send a push request to the responder along with the data transfer voucher.
  3. The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder.
  4. The responder’s data transfer module initiates the transfer by making a GraphSync request.
  5. The requestor receives the GraphSync request, verifies that it recognises the data transfer and begins sending data.
  6. The responder receives data and can produce an indication of progress.
  7. The responder completes receiving data, and notifies any listeners.

The push flow is ideal for storage deals, where the client initiates the data transfer straightaway once the provider indicates their intent to accept and publish the client’s deal proposal.

Pull Flow - Single Round Trip

Data Transfer - Single Round Trip Pull Flow

  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module initiates the transfer by making a pull request embedded in the GraphSync request to the responder. The request includes the data transfer voucher.
  3. The responder receives the GraphSync request, and forwards the data transfer request to the data transfer module.
  4. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder.
  5. The responder accepts the GraphSync request and sends the accepted response along with the data transfer level acceptance response.
  6. The requestor receives data and can produce an indication of progress. This step may come later in time, after the storage provider has finished unsealing the data.
  7. The requestor completes receiving data, and notifies any listeners.

Protocol

A data transfer CAN be negotiated over the network via the Data Transfer Protocol, a libp2p protocol type.

Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.

Data Structures

package datatransfer

import (
	"fmt"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/libp2p/go-libp2p/core/peer"
	cbg "github.com/whyrusleeping/cbor-gen"
)

//go:generate cbor-gen-for ChannelID ChannelStages ChannelStage Log

// TypeIdentifier is a unique string identifier for a type of encodable object in a
// registry
type TypeIdentifier string

// EmptyTypeIdentifier means there is no voucher present
const EmptyTypeIdentifier = TypeIdentifier("")

// TypedVoucher is a voucher or voucher result in IPLD form and an associated
// type identifier for that voucher or voucher result
type TypedVoucher struct {
	Voucher datamodel.Node
	Type    TypeIdentifier
}

// Equals is a utility to compare that two TypedVouchers are the same - both type
// and the voucher's IPLD content
func (tv1 TypedVoucher) Equals(tv2 TypedVoucher) bool {
	return tv1.Type == tv2.Type && ipld.DeepEqual(tv1.Voucher, tv2.Voucher)
}

// TransferID is an identifier for a data transfer, shared between
// request/responder and unique to the requester
type TransferID uint64

// ChannelID is a unique identifier for a channel, distinct by both the other
// party's peer ID + the transfer ID
type ChannelID struct {
	Initiator peer.ID
	Responder peer.ID
	ID        TransferID
}

func (c ChannelID) String() string {
	return fmt.Sprintf("%s-%s-%d", c.Initiator, c.Responder, c.ID)
}

// OtherParty returns the peer on the other side of the request, depending
// on whether this peer is the initiator or responder
func (c ChannelID) OtherParty(thisPeer peer.ID) peer.ID {
	if thisPeer == c.Initiator {
		return c.Responder
	}
	return c.Initiator
}

// Channel represents all the parameters for a single data transfer
type Channel interface {
	// TransferID returns the transfer id for this channel
	TransferID() TransferID

	// BaseCID returns the CID that is at the root of this data transfer
	BaseCID() cid.Cid

	// Selector returns the IPLD selector for this data transfer (represented as
	// an IPLD node)
	Selector() datamodel.Node

	// Voucher returns the initial voucher for this data transfer
	Voucher() TypedVoucher

	// Sender returns the peer id for the node that is sending data
	Sender() peer.ID

	// Recipient returns the peer id for the node that is receiving data
	Recipient() peer.ID

	// TotalSize returns the total size for the data being transferred
	TotalSize() uint64

	// IsPull returns whether this is a pull request
	IsPull() bool

	// ChannelID returns the ChannelID for this request
	ChannelID() ChannelID

	// OtherPeer returns the counter party peer for this channel
	OtherPeer() peer.ID
}

// ChannelState is channel parameters plus its current state
type ChannelState interface {
	Channel

	// SelfPeer returns the peer this channel belongs to
	SelfPeer() peer.ID

	// Status is the current status of this channel
	Status() Status

	// Sent returns the number of bytes sent
	Sent() uint64

	// Received returns the number of bytes received
	Received() uint64

	// Message offers additional information about the current status
	Message() string

	// Vouchers returns all vouchers sent on this channel
	Vouchers() []TypedVoucher

	// VoucherResults are results of vouchers sent on the channel
	VoucherResults() []TypedVoucher

	// LastVoucher returns the last voucher sent on the channel
	LastVoucher() TypedVoucher

	// LastVoucherResult returns the last voucher result sent on the channel
	LastVoucherResult() TypedVoucher

	// ReceivedCidsTotal returns the number of (non-unique) cids received so far
	// on the channel - note that a block can exist in more than one place in the DAG
	ReceivedCidsTotal() int64

	// QueuedCidsTotal returns the number of (non-unique) cids queued so far
	// on the channel - note that a block can exist in more than one place in the DAG
	QueuedCidsTotal() int64

	// SentCidsTotal returns the number of (non-unique) cids sent so far
	// on the channel - note that a block can exist in more than one place in the DAG
	SentCidsTotal() int64

	// Queued returns the number of bytes read from the node and queued for sending
	Queued() uint64

	// DataLimit is the maximum data that can be transferred on this channel before
	// revalidation. 0 indicates no limit.
	DataLimit() uint64

	// RequiresFinalization indicates at the end of the transfer, the channel should
	// be left open for a final settlement
	RequiresFinalization() bool

	// InitiatorPaused indicates whether the initiator of this channel is in a paused state
	InitiatorPaused() bool

	// ResponderPaused indicates whether the responder of this channel is in a paused state
	ResponderPaused() bool

	// BothPaused indicates both sides of the transfer have paused the transfer
	BothPaused() bool

	// SelfPaused indicates whether the local peer for this channel is in a paused state
	SelfPaused() bool

	// Stages returns the timeline of events this data transfer has gone through,
	// for observability purposes.
	//
	// It is unsafe for the caller to modify the return value, and changes
	// may not be persisted. It should be treated as immutable.
	Stages() *ChannelStages
}

// ChannelStages captures a timeline of the progress of a data transfer channel,
// grouped by stages.
//
// EXPERIMENTAL; subject to change.
type ChannelStages struct {
	// Stages contains an entry for every stage the channel has gone through.
	// Each stage then contains logs.
	Stages []*ChannelStage
}

// ChannelStage traces the execution of a data transfer channel stage.
//
// EXPERIMENTAL; subject to change.
type ChannelStage struct {
	// Human-readable fields.
	// TODO: these _will_ need to be converted to canonical representations, so
	//  they are machine readable.
	Name        string
	Description string

	// Timestamps.
	// TODO: may be worth adding an exit timestamp. It _could_ be inferred from
	//  the start of the next stage, or from the timestamp of the last log line
	//  if this is a terminal stage. But that's non-deterministic and it relies on
	//  assumptions.
	CreatedTime cbg.CborTime
	UpdatedTime cbg.CborTime

	// Logs contains a detailed timeline of events that occurred inside
	// this stage.
	Logs []*Log
}

// Log represents a point-in-time event that occurred inside a channel stage.
//
// EXPERIMENTAL; subject to change.
type Log struct {
	// Log is a human readable message.
	//
	// TODO: this _may_ need to be converted to a canonical data model so it
	//  is machine-readable.
	Log string

	UpdatedTime cbg.CborTime
}

// AddLog adds a log to the specified stage, creating the stage if
// it doesn't exist yet.
//
// EXPERIMENTAL; subject to change.
func (cs *ChannelStages) AddLog(stage, msg string) {
	if cs == nil {
		return
	}

	now := curTime()
	st := cs.GetStage(stage)
	if st == nil {
		st = &ChannelStage{
			CreatedTime: now,
		}
		cs.Stages = append(cs.Stages, st)
	}

	st.Name = stage
	st.UpdatedTime = now
	if msg != "" && (len(st.Logs) == 0 || st.Logs[len(st.Logs)-1].Log != msg) {
		// only add the log if it's not a duplicate.
		st.Logs = append(st.Logs, &Log{msg, now})
	}
}

// GetStage returns the ChannelStage object for a named stage, or nil if not found.
//
// TODO: the input should be a strongly-typed enum instead of a free-form string.
// TODO: drop Get from GetStage to make this code more idiomatic. Return a
//
//	second ok boolean to make it even more idiomatic.
//
// EXPERIMENTAL; subject to change.
func (cs *ChannelStages) GetStage(stage string) *ChannelStage {
	if cs == nil {
		return nil
	}

	for _, s := range cs.Stages {
		if s.Name == stage {
			return s
		}
	}

	return nil
}

func curTime() cbg.CborTime {
	now := time.Now()
	return cbg.CborTime(time.Unix(0, now.UnixNano()).UTC())
}
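
For illustration, here is a minimal, hedged usage sketch of the observability structures above; it assumes the package is importable from the go-data-transfer module, records a few log lines against named stages, and reads one stage back with GetStage.

package main

import (
	"fmt"

	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
)

func main() {
	stages := &datatransfer.ChannelStages{}

	// AddLog creates the "Requested" stage on first use and appends a log entry to it.
	stages.AddLog("Requested", "push request sent to responder")
	stages.AddLog("Ongoing", "transfer started")
	stages.AddLog("Ongoing", "half of the blocks sent")

	if st := stages.GetStage("Ongoing"); st != nil {
		fmt.Printf("stage %q has %d log entries\n", st.Name, len(st.Logs))
	}
}
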
package datatransfer

import "github.com/filecoin-project/go-statemachine/fsm"

// Status is the status of transfer for a given channel
type Status uint64

const (
	// Requested means a data transfer was requested but has not yet been approved
	Requested Status = iota

	// Ongoing means the data transfer is in progress
	Ongoing

	// TransferFinished indicates the initiator is done sending/receiving
	// data but is awaiting confirmation from the responder
	TransferFinished

	// ResponderCompleted indicates the initiator received a message from the
	// responder that it's completed
	ResponderCompleted

	// Finalizing means the responder is awaiting a final message from the initiator to
	// consider the transfer done
	Finalizing

	// Completing just means we have some final cleanup for a completed request
	Completing

	// Completed means the data transfer is completed successfully
	Completed

	// Failing just means we have some final cleanup for a failed request
	Failing

	// Failed means the data transfer failed
	Failed

	// Cancelling just means we have some final cleanup for a cancelled request
	Cancelling

	// Cancelled means the data transfer ended prematurely
	Cancelled

	// DEPRECATED: Use InitiatorPaused() method on ChannelState
	InitiatorPaused

	// DEPRECATED: Use ResponderPaused() method on ChannelState
	ResponderPaused

	// DEPRECATED: Use BothPaused() method on ChannelState
	BothPaused

	// ResponderFinalizing is a unique state where the responder is awaiting a final voucher
	ResponderFinalizing

	// ResponderFinalizingTransferFinished is a unique state where the responder is awaiting a final voucher
	// and we have received all data
	ResponderFinalizingTransferFinished

	// ChannelNotFoundError means the searched for data transfer does not exist
	ChannelNotFoundError

	// Queued indicates a data transfer request has been accepted, but is not actively transferring yet
	Queued

	// AwaitingAcceptance indicates a transfer request is actively being processed by the transport
	// even if the remote has not yet responded that it's accepted the transfer. Such a state can
	// occur, for example, in a requestor-initiated transfer that starts processing prior to receiving
	// acceptance from the server.
	AwaitingAcceptance
)

type statusList []Status

func (sl statusList) Contains(s Status) bool {
	for _, ts := range sl {
		if ts == s {
			return true
		}
	}
	return false
}

func (sl statusList) AsFSMStates() []fsm.StateKey {
	sk := make([]fsm.StateKey, 0, len(sl))
	for _, s := range sl {
		sk = append(sk, s)
	}
	return sk
}

var NotAcceptedStates = statusList{
	Requested,
	AwaitingAcceptance,
	Cancelled,
	Cancelling,
	Failed,
	Failing,
	ChannelNotFoundError}

func (s Status) IsAccepted() bool {
	return !NotAcceptedStates.Contains(s)
}
func (s Status) String() string {
	return Statuses[s]
}

var FinalizationStatuses = statusList{Finalizing, Completed, Completing}

func (s Status) InFinalization() bool {
	return FinalizationStatuses.Contains(s)
}

var TransferCompleteStates = statusList{
	TransferFinished,
	ResponderFinalizingTransferFinished,
	Finalizing,
	Completed,
	Completing,
	Failing,
	Failed,
	Cancelling,
	Cancelled,
	ChannelNotFoundError,
}

func (s Status) TransferComplete() bool {
	return TransferCompleteStates.Contains(s)
}

var TransferringStates = statusList{
	Ongoing,
	ResponderCompleted,
	ResponderFinalizing,
	AwaitingAcceptance,
}

func (s Status) Transferring() bool {
	return TransferringStates.Contains(s)
}

// Statuses are human readable names for data transfer states
var Statuses = map[Status]string{
	// Requested means a data transfer was requested but has not yet been approved
	Requested:                           "Requested",
	Ongoing:                             "Ongoing",
	TransferFinished:                    "TransferFinished",
	ResponderCompleted:                  "ResponderCompleted",
	Finalizing:                          "Finalizing",
	Completing:                          "Completing",
	Completed:                           "Completed",
	Failing:                             "Failing",
	Failed:                              "Failed",
	Cancelling:                          "Cancelling",
	Cancelled:                           "Cancelled",
	InitiatorPaused:                     "InitiatorPaused",
	ResponderPaused:                     "ResponderPaused",
	BothPaused:                          "BothPaused",
	ResponderFinalizing:                 "ResponderFinalizing",
	ResponderFinalizingTransferFinished: "ResponderFinalizingTransferFinished",
	ChannelNotFoundError:                "ChannelNotFoundError",
	Queued:                              "Queued",
	AwaitingAcceptance:                  "AwaitingAcceptance",
}
// Manager is the core interface presented by all implementations of the data transfer subsystem
type Manager interface {

	// Start initializes data transfer processing
	Start(ctx context.Context) error

	// OnReady registers a listener for when the data transfer comes on line
	OnReady(ReadyFunc)

	// Stop terminates all data transfers and ends processing
	Stop(ctx context.Context) error

	// RegisterVoucherType registers a validator for the given voucher type
	// will error if voucher type does not implement voucher
	// or if there is a voucher type registered with an identical identifier
	RegisterVoucherType(voucherType TypeIdentifier, validator RequestValidator) error

	// RegisterTransportConfigurer registers the given transport configurer to be run on requests with the given voucher
	// type
	RegisterTransportConfigurer(voucherType TypeIdentifier, configurer TransportConfigurer) error

	// open a data transfer that will send data to the recipient peer and
	// transfer parts of the piece that match the selector
	OpenPushDataChannel(ctx context.Context, to peer.ID, voucher TypedVoucher, baseCid cid.Cid, selector datamodel.Node, options ...TransferOption) (ChannelID, error)

	// open a data transfer that will request data from the sending peer and
	// transfer parts of the piece that match the selector
	OpenPullDataChannel(ctx context.Context, to peer.ID, voucher TypedVoucher, baseCid cid.Cid, selector datamodel.Node, options ...TransferOption) (ChannelID, error)

	// send an intermediate voucher as needed when the receiver sends a request for revalidation
	SendVoucher(ctx context.Context, chid ChannelID, voucher TypedVoucher) error

	// send information from the responder to update the initiator on the state of their voucher
	SendVoucherResult(ctx context.Context, chid ChannelID, voucherResult TypedVoucher) error

	// Update the validation status for a given channel, to change data limits, finalization, accepted status, and pause state,
	// and send new voucher results as needed
	UpdateValidationStatus(ctx context.Context, chid ChannelID, validationResult ValidationResult) error

	// close an open channel (effectively a cancel)
	CloseDataTransferChannel(ctx context.Context, chid ChannelID) error

	// pause a data transfer channel (only allowed if transport supports it)
	PauseDataTransferChannel(ctx context.Context, chid ChannelID) error

	// resume a data transfer channel (only allowed if transport supports it)
	ResumeDataTransferChannel(ctx context.Context, chid ChannelID) error

	// get status of a transfer
	TransferChannelStatus(ctx context.Context, x ChannelID) Status

	// get channel state
	ChannelState(ctx context.Context, chid ChannelID) (ChannelState, error)

	// get notified when certain types of events happen
	SubscribeToEvents(subscriber Subscriber) Unsubscribe

	// get all in progress transfers
	InProgressChannels(ctx context.Context) (map[ChannelID]ChannelState, error)

	// RestartDataTransferChannel restarts an existing data transfer channel
	RestartDataTransferChannel(ctx context.Context, chid ChannelID) error
}
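
To tie the interface together, below is a hedged usage sketch: it starts the manager, subscribes to progress events, and opens a pull channel for the DAG rooted at baseCid. How the Manager, the voucher, and the request validator are constructed is application-specific; the function and variable names here are illustrative assumptions (the selector constant comes from go-ipld-prime’s selectorparse package).

package example

import (
	"context"
	"fmt"

	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
	"github.com/ipfs/go-cid"
	selectorparse "github.com/ipld/go-ipld-prime/traversal/selector/parse"
	"github.com/libp2p/go-libp2p/core/peer"
)

// startPull opens a pull data transfer channel towards the given provider.
func startPull(ctx context.Context, mgr datatransfer.Manager, provider peer.ID,
	baseCid cid.Cid, voucher datatransfer.TypedVoucher) (datatransfer.ChannelID, error) {

	if err := mgr.Start(ctx); err != nil {
		return datatransfer.ChannelID{}, err
	}

	// Log progress events; keep the returned Unsubscribe if you need to stop listening later.
	_ = mgr.SubscribeToEvents(func(event datatransfer.Event, state datatransfer.ChannelState) {
		fmt.Printf("transfer %s: %s (%d bytes received)\n",
			state.ChannelID(), state.Status(), state.Received())
	})

	// Select the entire DAG below baseCid and ask the provider to send it.
	return mgr.OpenPullDataChannel(ctx, provider, voucher, baseCid,
		selectorparse.CommonSelector_ExploreAllRecursively)
}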

Data Formats and Serialization

Filecoin seeks to make use of as few data formats as possible, with well-specified serialization rules, to improve protocol security through simplicity and to enable interoperability amongst implementations of the Filecoin protocol.

Read more on design considerations here for CBOR-usage and here for int types in Filecoin.

Data Formats

Filecoin in-memory data types are mostly straightforward. Implementations should support two integer types: Int (a native 64-bit integer) and BigInt (arbitrary precision), and should avoid dealing with floating-point numbers to minimize interoperability issues across programming languages and implementations.

You can also read more on data formats as part of randomness generation in the Filecoin protocol.

Serialization

Data Serialization in Filecoin ensures a consistent format for serializing in-memory data for transfer in-flight and in-storage. Serialization is critical to protocol security and interoperability across implementations of the Filecoin protocol, enabling consistent state updates across Filecoin nodes.

All data structures in Filecoin are CBOR-tuple encoded. That is, any data structures used in the Filecoin system (structs in this spec) should be serialized as CBOR-arrays with items corresponding to the data structure fields in their order of declaration.

You can find the encoding structure for major data types in CBOR here.

For illustration, an in-memory map would be represented as a CBOR-array of the keys and values listed in some pre-determined order. A near-term update to the serialization format will involve tagging fields appropriately to ensure appropriate serialization/deserialization as the protocol evolves.
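
As a purely illustrative sketch of what tuple encoding looks like, the example below uses the general-purpose fxamacker/cbor library (not the cbor-gen tooling Filecoin implementations typically rely on) to serialize a struct as a CBOR array whose items follow the field declaration order; the struct and its field values are made up for the example.

package main

import (
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

// The blank struct field with the "toarray" tag asks the encoder to emit a
// CBOR array (a "tuple") instead of a map keyed by field names.
type Deal struct {
	_        struct{} `cbor:",toarray"`
	Client   string
	Provider string
	Size     uint64
}

func main() {
	b, err := cbor.Marshal(Deal{Client: "f01000", Provider: "f01001", Size: 2048})
	if err != nil {
		panic(err)
	}
	// The output begins with 0x83: a CBOR array of three items, in field order.
	fmt.Printf("%x\n", b)
}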

Virtual Machine

An Actor in the Filecoin Blockchain is the equivalent of a smart contract in the Ethereum Virtual Machine.

The Filecoin Virtual Machine (VM) is the system component that is in charge of the execution of all actor code. Execution of actors on the Filecoin VM (i.e., on-chain executions) incurs a gas cost.

Any operation applied (i.e., executed) on the Filecoin VM produces an output in the form of a State Tree (discussed below). The latest State Tree is the current source of truth in the Filecoin Blockchain. The State Tree is identified by a CID, which is stored in the IPLD store.

VM Actor Interface

As mentioned above, Actors are the Filecoin equivalent of smart contracts in the Ethereum Virtual Machine. As such, Actors are central components of the system. Any change to the current state of the Filecoin blockchain has to be triggered through an actor method invocation.

This sub-section describes the interface between Actors and the Filecoin Virtual Machine. This means that most of what is described below does not strictly belong to the VM. Instead it is logic that sits on the interface between the VM and Actors logic.

There are eleven (11) types of builtin Actors in total, not all of which interact with the VM. Some Actors do not invoke changes to the StateTree of the blockchain and therefore, do not need to have an interface to the VM. We discuss the details of all System Actors later on in the System Actors subsection.

The actor address is a stable address generated by hashing the sender’s public key and a creation nonce. It should be stable across chain re-organizations. The actor ID address on the other hand, is an auto-incrementing address that is compact but can change in case of chain re-organizations. That being said, after being created, actors should use an actor address.
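
The distinction can be seen with the go-address library. The sketch below is hedged and illustrative only: real robust actor addresses are derived by the runtime (see rt.NewActorAddress in the InitActor code later in this section), so the payload bytes used here are made up.

package main

import (
	"fmt"

	addr "github.com/filecoin-project/go-address"
)

func main() {
	// Compact, auto-incrementing ID address (here, the actor with ID 1001).
	idAddr, err := addr.NewIDAddress(1001)
	if err != nil {
		panic(err)
	}

	// Robust actor address: a hash of creator-supplied bytes, stable across re-organizations.
	robust, err := addr.NewActorAddress([]byte("illustrative creator key and nonce"))
	if err != nil {
		panic(err)
	}

	// Prints t01001/f01001 and t2.../f2... depending on the configured network prefix.
	fmt.Println("ID address:   ", idAddr)
	fmt.Println("actor address:", robust)
}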

package builtin

import (
	addr "github.com/filecoin-project/go-address"
)

// Addresses for singleton system actors.
var (
	// Distinguished AccountActor that is the source of system implicit messages.
	SystemActorAddr           = mustMakeAddress(0)
	InitActorAddr             = mustMakeAddress(1)
	RewardActorAddr           = mustMakeAddress(2)
	CronActorAddr             = mustMakeAddress(3)
	StoragePowerActorAddr     = mustMakeAddress(4)
	StorageMarketActorAddr    = mustMakeAddress(5)
	VerifiedRegistryActorAddr = mustMakeAddress(6)
	// Distinguished AccountActor that is the destination of all burnt funds.
	BurntFundsActorAddr = mustMakeAddress(99)
)

const FirstNonSingletonActorId = 100

func mustMakeAddress(id uint64) addr.Address {
	address, err := addr.NewIDAddress(id)
	if err != nil {
		panic(err)
	}
	return address
}

The ActorState structure is composed of the actor’s balance, in terms of tokens held by this actor, as well as a group of state methods used to query, inspect and interact with chain state.

State Tree

The State Tree is the output of the execution of any operation applied on the Filecoin Blockchain. The on-chain (i.e., VM) state data structure is a map (in the form of a Hash Array Mapped Trie - HAMT) that binds addresses to actor states. The current State Tree is accessed by the VM upon every actor method invocation.

// StateTree stores actors state by their ID.
type StateTree struct {
	root        adt.Map
	version     types.StateTreeVersion
	info        cid.Cid
	Store       cbor.IpldStore
	lookupIDFun func(address.Address) (address.Address, error)

	snaps *stateSnaps
}
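
Conceptually, the HAMT above behaves like a map from ID address to an actor record (code CID, state head CID, call sequence number, and balance). The toy sketch below models only that logical shape; it is not the on-chain data structure, and the field set is a simplified assumption (balances are BigInts on chain).

package main

import (
	"fmt"

	addr "github.com/filecoin-project/go-address"
	"github.com/ipfs/go-cid"
)

// actorRecord mirrors, in simplified form, what an entry in the state tree carries.
type actorRecord struct {
	Code    cid.Cid // CID of the actor's code
	Head    cid.Cid // CID of the root of the actor's own state
	Nonce   uint64  // call sequence number (for account actors)
	Balance uint64  // attoFIL held by the actor (a BigInt in the real tree)
}

// toyStateTree stands in for the HAMT: ID address -> actor record.
type toyStateTree map[addr.Address]actorRecord

func main() {
	tree := toyStateTree{}

	id, _ := addr.NewIDAddress(1001)
	tree[id] = actorRecord{Nonce: 0, Balance: 5_000_000}

	if rec, ok := tree[id]; ok {
		fmt.Printf("actor %s: nonce=%d balance=%d\n", id, rec.Nonce, rec.Balance)
	}
}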

VM Message - Actor Method Invocation

A message is the unit of communication between two actors, and thus the primitive cause of changes in state. A message combines:

  • a token amount to be transferred from the sender to the receiver, and
  • a method with parameters to be invoked on the receiver (optional/where applicable).

Actor code may send additional messages to other actors while processing a received message. Messages are processed synchronously, that is, an actor waits for a sent message to complete before resuming control.

The processing of a message consumes units of computation and storage, both of which are denominated in gas. A message’s gas limit provides an upper bound on the computation required to process it. The sender of a message pays for the gas units consumed by a message’s execution (including all nested messages) at a gas price they determine. A block producer chooses which messages to include in a block and is rewarded according to each message’s gas price and consumption, forming a market.

Message syntax validation

A syntactically invalid message must not be transmitted, retained in a message pool, or included in a block. If an invalid message is received, it should be dropped and not propagated further.

When transmitted individually (before inclusion in a block), a message is packaged as a SignedMessage, regardless of the signature scheme used. A valid signed message has a total serialized size no greater than message.MessageMaxSize.

type SignedMessage struct {
	Message   Message
	Signature crypto.Signature
}

A syntactically valid UnsignedMessage:

  • has a well-formed, non-empty To address,
  • has a well-formed, non-empty From address,
  • has Value no less than zero and no greater than the total token supply (2e9 * 1e18),
  • has non-negative GasPrice,
  • has GasLimit that is at least equal to the gas consumption associated with the message’s serialized bytes, and
  • has GasLimit that is no greater than the block gas limit network parameter.
type Message struct {
	// Version of this message (has to be non-negative)
	Version uint64

	// Address of the receiving actor.
	To   address.Address
	// Address of the sending actor.
	From address.Address

	CallSeqNum uint64

	// Value to transfer from sender's to receiver's balance.
	Value BigInt

	// GasPrice is a Gas-to-FIL cost
	GasPrice BigInt
	// Maximum Gas to be spent on the processing of this message
	GasLimit int64

	// Optional method to invoke on receiver, zero for a plain value transfer.
	Method abi.MethodNum
	//Serialized parameters to the method.
	Params []byte
}

There should be several functions able to extract information from the Message struct, such as the sender and recipient addresses, the value to be transferred, the required funds to execute the message and the CID of the message.

Given that Messages should eventually be included in a Block and added to the blockchain, the validity of the message should be checked with regard to the sender and the receiver of the message, the value (which should be non-negative and always smaller than the circulating supply), the gas price (which again should be non-negative), and the GasLimit, which should not be greater than the block’s gas limit.

Message semantic validation

Semantic validation refers to validation requiring information outside of the message itself.

A semantically valid SignedMessage must carry a signature that verifies the payload as having been signed with the public key of the account actor identified by the From address. Note that when the From address is an ID-address, the public key must be looked up in the state of the sending account actor in the parent state identified by the block.

Note: the sending actor must exist in the parent state identified by the block that includes the message. This means that it is not valid for a single block to include a message that creates a new account actor and a message from that same actor. The first message from that actor must wait until a subsequent epoch. Message pools may exclude messages from an actor that is not yet present in the chain state.

There is no further semantic validation of a message that can cause a block including the message to be invalid. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution. The MessageReceipt struct includes the following:

type MessageReceipt struct {
	ExitCode exitcode.ExitCode
	Return   []byte
	GasUsed  int64
}

However, a message may fail to execute to completion, in which case it will not trigger the desired state change.

The reason for this “no message semantic validation” policy is that the state that a message will be applied to cannot be known before the message is executed as part of a tipset. A block producer does not know whether another block will precede it in the tipset, thus altering the state to which the block’s messages will apply from the declared parent state.

package types

import (
	"bytes"
	"encoding/json"
	"fmt"

	block "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/network"

	"github.com/filecoin-project/lotus/build"
)

const MessageVersion = 0

type ChainMsg interface {
	Cid() cid.Cid
	VMMessage() *Message
	ToStorageBlock() (block.Block, error)
	// FIXME: This is the *message* length, this name is misleading.
	ChainLength() int
}

type Message struct {
	Version uint64

	To   address.Address
	From address.Address

	Nonce uint64

	Value abi.TokenAmount

	GasLimit   int64
	GasFeeCap  abi.TokenAmount
	GasPremium abi.TokenAmount

	Method abi.MethodNum
	Params []byte
}

func (m *Message) Caller() address.Address {
	return m.From
}

func (m *Message) Receiver() address.Address {
	return m.To
}

func (m *Message) ValueReceived() abi.TokenAmount {
	return m.Value
}

func DecodeMessage(b []byte) (*Message, error) {
	var msg Message
	if err := msg.UnmarshalCBOR(bytes.NewReader(b)); err != nil {
		return nil, err
	}

	if msg.Version != MessageVersion {
		return nil, fmt.Errorf("decoded message had incorrect version (%d)", msg.Version)
	}

	return &msg, nil
}

func (m *Message) Serialize() ([]byte, error) {
	buf := new(bytes.Buffer)
	if err := m.MarshalCBOR(buf); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func (m *Message) ChainLength() int {
	ser, err := m.Serialize()
	if err != nil {
		panic(err)
	}
	return len(ser)
}

func (m *Message) ToStorageBlock() (block.Block, error) {
	data, err := m.Serialize()
	if err != nil {
		return nil, err
	}

	c, err := abi.CidBuilder.Sum(data)
	if err != nil {
		return nil, err
	}

	return block.NewBlockWithCid(data, c)
}

func (m *Message) Cid() cid.Cid {
	b, err := m.ToStorageBlock()
	if err != nil {
		panic(fmt.Sprintf("failed to marshal message: %s", err)) // I think this is maybe sketchy, what happens if we try to serialize a message with an undefined address in it?
	}

	return b.Cid()
}

type mCid struct {
	*RawMessage
	CID cid.Cid
}

type RawMessage Message

func (m *Message) MarshalJSON() ([]byte, error) {
	return json.Marshal(&mCid{
		RawMessage: (*RawMessage)(m),
		CID:        m.Cid(),
	})
}

func (m *Message) RequiredFunds() BigInt {
	return BigMul(m.GasFeeCap, NewInt(uint64(m.GasLimit)))
}

func (m *Message) VMMessage() *Message {
	return m
}

func (m *Message) Equals(o *Message) bool {
	return m.Cid() == o.Cid()
}

func (m *Message) EqualCall(o *Message) bool {
	m1 := *m
	m2 := *o

	m1.GasLimit, m2.GasLimit = 0, 0
	m1.GasFeeCap, m2.GasFeeCap = big.Zero(), big.Zero()
	m1.GasPremium, m2.GasPremium = big.Zero(), big.Zero()

	return (&m1).Equals(&m2)
}

func (m *Message) ValidForBlockInclusion(minGas int64, version network.Version) error {
	if m.Version != 0 {
		return xerrors.New("'Version' unsupported")
	}

	if m.To == address.Undef {
		return xerrors.New("'To' address cannot be empty")
	}

	if m.To == build.ZeroAddress && version >= network.Version7 {
		return xerrors.New("invalid 'To' address")
	}

	if !abi.AddressValidForNetworkVersion(m.To, version) {
		return xerrors.New("'To' address protocol unsupported for network version")
	}

	if m.From == address.Undef {
		return xerrors.New("'From' address cannot be empty")
	}

	if !abi.AddressValidForNetworkVersion(m.From, version) {
		return xerrors.New("'From' address protocol unsupported for network version")
	}

	if m.Value.Int == nil {
		return xerrors.New("'Value' cannot be nil")
	}

	if m.Value.LessThan(big.Zero()) {
		return xerrors.New("'Value' field cannot be negative")
	}

	if m.Value.GreaterThan(TotalFilecoinInt) {
		return xerrors.New("'Value' field cannot be greater than total filecoin supply")
	}

	if m.GasFeeCap.Int == nil {
		return xerrors.New("'GasFeeCap' cannot be nil")
	}

	if m.GasFeeCap.LessThan(big.Zero()) {
		return xerrors.New("'GasFeeCap' field cannot be negative")
	}

	if m.GasPremium.Int == nil {
		return xerrors.New("'GasPremium' cannot be nil")
	}

	if m.GasPremium.LessThan(big.Zero()) {
		return xerrors.New("'GasPremium' field cannot be negative")
	}

	if m.GasPremium.GreaterThan(m.GasFeeCap) {
		return xerrors.New("'GasFeeCap' less than 'GasPremium'")
	}

	if m.GasLimit > build.BlockGasLimit {
		return xerrors.Errorf("'GasLimit' field cannot be greater than a block's gas limit (%d > %d)", m.GasLimit, build.BlockGasLimit)
	}

	if m.GasLimit <= 0 {
		return xerrors.Errorf("'GasLimit' field %d must be positive", m.GasLimit)
	}

	// since prices might vary with time, this is technically semantic validation
	if m.GasLimit < minGas {
		return xerrors.Errorf("'GasLimit' field cannot be less than the cost of storing a message on chain %d < %d", m.GasLimit, minGas)
	}

	return nil
}

// EffectiveGasPremium returns the effective gas premium claimable by the miner
// given the supplied base fee. This method is not used anywhere except the Eth API.
//
// Filecoin clamps the gas premium at GasFeeCap - BaseFee, if lower than the
// specified premium. Returns 0 if GasFeeCap is less than BaseFee.
func (m *Message) EffectiveGasPremium(baseFee abi.TokenAmount) abi.TokenAmount {
	available := big.Sub(m.GasFeeCap, baseFee)
	// It's possible that storage providers may include messages with gasFeeCap less than the baseFee
	// In such cases, their reward should be viewed as zero
	if available.LessThan(big.NewInt(0)) {
		available = big.NewInt(0)
	}
	if big.Cmp(m.GasPremium, available) <= 0 {
		return m.GasPremium
	}
	return available
}

const TestGasLimit = 100e6
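
A short, hedged usage sketch of the helpers above follows: it builds a Message with made-up addresses and amounts, asks for the maximum gas cost the sender can be charged (GasFeeCap * GasLimit), and computes the premium a miner could actually claim given a base fee.

package main

import (
	"fmt"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/lotus/chain/types"
)

func main() {
	to, _ := address.NewIDAddress(1001)
	from, _ := address.NewIDAddress(1002)

	msg := &types.Message{
		To:         to,
		From:       from,
		Value:      abi.NewTokenAmount(1_000_000),
		GasLimit:   2_000_000,
		GasFeeCap:  abi.NewTokenAmount(150),
		GasPremium: abi.NewTokenAmount(30),
	}

	// RequiredFunds is GasFeeCap * GasLimit: the most the sender can pay for gas.
	fmt.Println("required funds:", msg.RequiredFunds())

	// With a base fee of 140, the effective premium is clamped to GasFeeCap - BaseFee = 10.
	baseFee := abi.NewTokenAmount(140)
	fmt.Println("effective premium:", msg.EffectiveGasPremium(baseFee))
}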

VM Runtime Environment (Inside the VM)

Receipts

A MessageReceipt contains the result of a top-level message execution. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution.

A syntactically valid receipt has:

  • a non-negative ExitCode,
  • a non-empty Return value only if the exit code is zero, and
  • a non-negative GasUsed.
type MessageReceipt struct {
	ExitCode exitcode.ExitCode
	Return   []byte
	GasUsed  int64
}

vm/runtime Actors Interface

The Actors Interface implementation can be found here

vm/runtime VM Implementation

The Lotus implementation of the Filecoin Virtual Machine runtime can be found here

Exit Codes

There are some common runtime exit codes that are shared by different actors. Their definition can be found here.

Gas Fees

Summary

As is traditionally the case with many blockchains, Gas is a unit of measure of how much storage and/or compute resource an on-chain message operation consumes in order to be executed. At a high level, it works as follows: the message sender specifies the maximum amount they are willing to pay in order for their message to be executed and included in a block. This is specified both in terms of the total number of units of gas (GasLimit), which is generally expected to be higher than the actual GasUsed, and in terms of the price (or fee) per unit of gas (GasFeeCap).

Traditionally, GasUsed * GasFeeCap goes to the block-producing miner as a reward. This product is treated as the priority fee for message inclusion; that is, messages are ordered by decreasing GasUsed * GasFeeCap, and those with the highest value are prioritised, given that they return more profit to the miner.

However, it has been observed that this tactic (of paying GasUsed * GasFeeCap) is problematic for block-producing miners for a few reasons. Firstly, a block-producing miner may include a very expensive message (in terms of chain resources required) for free, in which case the chain itself has to bear the cost. Secondly, message senders can set arbitrarily high prices for low-cost messages (again, in terms of chain resources), leading to a DoS vulnerability.

In order to overcome this situation, the Filecoin blockchain defines a BaseFee, which is burnt for every message. The rationale is that, since Gas is a measure of on-chain resource consumption, it makes sense for it to be burned rather than rewarded to miners; this also removes the opportunity for fee manipulation by miners. The BaseFee is dynamic, adjusting automatically to network congestion, which makes the network resilient against spam attacks: because network load increases during a spam attack, maintaining full blocks of spam messages for an extended period of time becomes impossible for an attacker due to the ever-increasing BaseFee.

Finally, GasPremium is the priority fee included by senders to incentivize miners to pick the most profitable messages. In other words, if a message sender wants its message to be included more quickly, they can set a higher GasPremium.

Parameters

  • GasUsed is a measure of the amount of resources (or units of gas) consumed in order to execute a message; each unit of gas is priced in attoFIL (see the fee parameters below). GasUsed is independent of whether a message was executed correctly or failed.
  • BaseFee is the set price per unit of gas (measured in attoFIL/gas unit) to be burned (sent to an unrecoverable address) for every message execution. The value of the BaseFee is dynamic and adjusts according to current network congestion parameters. For example, when the network exceeds 5B gas limit usage, the BaseFee increases and the opposite happens when gas limit usage falls below 5B. The BaseFee applied to each block should be included in the block itself. It should be possible to get the value of the current BaseFee from the head of the chain. The BaseFee applies per unit of GasUsed and therefore, the total amount of gas burned for a message is BaseFee * GasUsed. Note that the BaseFee is incurred for every message, but its value is the same for all messages in the same block.
  • GasLimit is measured in units of gas and set by the message sender. It imposes a hard limit on the amount of gas (i.e., number of units of gas) that a message’s execution should be allowed to consume on chain. A message consumes gas for every fundamental operation it triggers, and a message that runs out of gas fails. When a message fails, every modification to the state that happened as a result of this message’s execution is reverted back to its previous state. Independently of whether a message execution was successful or not, the miner will receive a reward for the resources they consumed to execute the message (see GasPremium below).
  • GasFeeCap is the maximum price that the message sender is willing to pay per unit of gas (measured in attoFIL/gas unit). Together with the GasLimit, the GasFeeCap is setting the maximum amount of FIL that a sender will pay for a message: a sender is guaranteed that a message will never cost them more than GasLimit * GasFeeCap attoFIL (not including any Premium that the message includes for its recipient).
  • GasPremium is the price per unit of gas (measured in attoFIL/gas unit) that the message sender is willing to pay (on top of the BaseFee) to “tip” the miner that will include this message in a block. A message typically earns its miner GasLimit * GasPremium attoFIL, where the effective GasPremium is capped at GasFeeCap - BaseFee. Note that GasPremium is applied to GasLimit, as opposed to GasUsed, in order to make message selection for miners more straightforward. A worked sketch of this fee accounting follows this list.
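
To make the parameters above concrete, here is a hedged, self-contained sketch with illustrative numbers (real values are far larger). It computes the base fee burn, the miner tip, and the sender’s hard spending bound for gas; the additional over-estimation burn is computed separately by ComputeGasOverestimationBurn, shown right below.

package main

import "fmt"

func main() {
	const (
		baseFee    = int64(100) // attoFIL per gas unit, set by the network
		gasFeeCap  = int64(150) // attoFIL per gas unit, set by the sender
		gasPremium = int64(30)  // attoFIL per gas unit, set by the sender
		gasLimit   = int64(2_000_000)
		gasUsed    = int64(1_500_000)
	)

	// Burned for on-chain resource consumption, regardless of success.
	baseFeeBurn := baseFee * gasUsed

	// The miner tip is paid on GasLimit, with the premium clamped so that
	// BaseFee plus the effective premium never exceeds GasFeeCap.
	effectivePremium := gasPremium
	if gasFeeCap-baseFee < effectivePremium {
		effectivePremium = gasFeeCap - baseFee
	}
	minerTip := effectivePremium * gasLimit

	// The sender can never be charged more than this for gas.
	maxGasSpend := gasFeeCap * gasLimit

	fmt.Println("base fee burn:", baseFeeBurn) // 150000000
	fmt.Println("miner tip:    ", minerTip)    // 60000000
	fmt.Println("hard bound:   ", maxGasSpend) // 300000000
}
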
// ComputeGasOverestimationBurn computes amount of gas to be refunded and amount of gas to be burned.
// Result is (refund, burn).
func ComputeGasOverestimationBurn(gasUsed, gasLimit int64) (int64, int64) {
	if gasUsed == 0 {
		return 0, gasLimit
	}

	// over = gasLimit/gasUsed - 1 - 0.1
	// over = min(over, 1)
	// gasToBurn = (gasLimit - gasUsed) * over

	// so to factor out division from `over`
	// over*gasUsed = min(gasLimit - (11*gasUsed)/10, gasUsed)
	// gasToBurn = ((gasLimit - gasUsed)*over*gasUsed) / gasUsed
	over := gasLimit - (gasOveruseNum*gasUsed)/gasOveruseDenom
	if over < 0 {
		return gasLimit - gasUsed, 0
	}

	// if we want sharper scaling it goes here:
	// over *= 2

	if over > gasUsed {
		over = gasUsed
	}

	// needs bigint, as it overflows in pathological case gasLimit > 2^32 gasUsed = gasLimit / 2
	gasToBurn := big.NewInt(gasLimit - gasUsed)
	gasToBurn = big.Mul(gasToBurn, big.NewInt(over))
	gasToBurn = big.Div(gasToBurn, big.NewInt(gasUsed))

	return gasLimit - gasUsed - gasToBurn.Int64(), gasToBurn.Int64()
}
func ComputeNextBaseFee(baseFee types.BigInt, gasLimitUsed int64, noOfBlocks int, epoch abi.ChainEpoch) types.BigInt {
	// delta := gasLimitUsed/noOfBlocks - build.BlockGasTarget
	// change := baseFee * deta / BlockGasTarget
	// nextBaseFee = baseFee + change
	// nextBaseFee = max(nextBaseFee, build.MinimumBaseFee)

	var delta int64
	if epoch > build.UpgradeSmokeHeight {
		delta = gasLimitUsed / int64(noOfBlocks)
		delta -= build.BlockGasTarget
	} else {
		delta = build.PackingEfficiencyDenom * gasLimitUsed / (int64(noOfBlocks) * build.PackingEfficiencyNum)
		delta -= build.BlockGasTarget
	}

	// cap change at 12.5% (BaseFeeMaxChangeDenom) by capping delta
	if delta > build.BlockGasTarget {
		delta = build.BlockGasTarget
	}
	if delta < -build.BlockGasTarget {
		delta = -build.BlockGasTarget
	}

	change := big.Mul(baseFee, big.NewInt(delta))
	change = big.Div(change, big.NewInt(build.BlockGasTarget))
	change = big.Div(change, big.NewInt(build.BaseFeeMaxChangeDenom))

	nextBaseFee := big.Add(baseFee, change)
	if big.Cmp(nextBaseFee, big.NewInt(build.MinimumBaseFee)) < 0 {
		nextBaseFee = big.NewInt(build.MinimumBaseFee)
	}
	return nextBaseFee
}

Notes & Implications

  • The GasFeeCap should always be higher than the network’s BaseFee. If a message’s GasFeeCap is lower than the BaseFee, then the remainder comes from the miner (as a penalty). This penalty is applied to the miner because they have selected a message that pays less than the network BaseFee (i.e., does not cover the network costs). However, a miner might want to choose a message whose GasFeeCap is smaller than the BaseFee if the same sender has another message in the message pool whose GasFeeCap is much bigger than the BaseFee. Recall that a miner should pick all the messages of a sender from the message pool, if more than one exists. The justification is that the increased fee of the second message will pay off the loss from the first.

  • If BaseFee + GasPremium > GasFeeCap, then the miner might not earn the entire GasLimit * GasPremium as their reward.

  • A message is hard-constrained to spending no more than GasFeeCap * GasLimit. From this amount, the network BaseFee is paid (burnt) first. After that, up to GasLimit * GasPremium will be given to the miner as a reward.

  • A message that runs out of gas fails with an “out of gas” exit code. GasUsed * BaseFee will still be burned (in this case GasUsed = GasLimit), and the miner will still be rewarded GasLimit * GasPremium. This assumes that GasFeeCap > BaseFee + GasPremium.

  • A low value for the GasFeeCap will likely cause the message to be stuck in the message pool, as it will not be attractive enough, in terms of profit, for any miner to pick it and include it in a block. When this happens, there is a procedure to update the GasFeeCap so that the message becomes more attractive to miners. The sender can push a new message into the message pool (which, by default, will propagate to other miners’ message pools) where: i) the identifier of the old and new messages is the same (e.g., same Nonce) and ii) the GasPremium is updated and increased by at least 25% of the previous value.

System Actors

There are eleven (11) builtin System Actors in total, but not all of them interact with the VM. Each actor is identified by a Code ID (or CID).

There are two (2) system actors required for VM processing:

  • the InitActor, which initializes new actors and records the network name, and
  • the CronActor, a scheduler actor that runs critical functions at every epoch.

There are another two actors that interact with the VM:

  • the AccountActor responsible for user accounts (non-singleton), and
  • the RewardActor for block reward and token vesting (singleton).

The remaining seven (7) builtin System Actors that do not interact directly with the VM are the following:

  • StorageMarketActor: responsible for managing storage and retrieval deals [ Market Actor Repo]
  • StorageMinerActor: actor responsible to deal with storage mining operations and collect proofs [ Storage Miner Actor Repo]
  • MultisigActor (or Multi-Signature Wallet Actor): responsible for dealing with operations involving the Filecoin wallet [ Multisig Actor Repo]
  • PaymentChannelActor: responsible for setting up and settling funds related to payment channels [ Paych Actor Repo]
  • StoragePowerActor: responsible for keeping track of the storage power allocated at each storage miner [ Storage Power Actor]
  • VerifiedRegistryActor: responsible for managing verified clients [ Verifreg Actor Repo]
  • SystemActor: general system actor [ System Actor Repo]

CronActor

Built in to the genesis state, the CronActor’s dispatch table invokes the StoragePowerActor and StorageMarketActor for them to maintain internal state and process deferred events. It could in principle invoke other actors after a network upgrade.

package cron

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/cbor"
	rtt "github.com/filecoin-project/go-state-types/rt"
	cron0 "github.com/filecoin-project/specs-actors/actors/builtin/cron"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)

// The cron actor is a built-in singleton that sends messages to other registered actors at the end of each epoch.
type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.EpochTick,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.CronActorCodeID
}

func (a Actor) IsSingleton() bool {
	return true
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

//type ConstructorParams struct {
//	Entries []Entry
//}
type ConstructorParams = cron0.ConstructorParams

type EntryParam = cron0.Entry

func (a Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	entries := make([]Entry, len(params.Entries))
	for i, e := range params.Entries {
		entries[i] = Entry(e) // Identical
	}
	rt.StateCreate(ConstructState(entries))
	return nil
}

// Invoked by the system after all other messages in the epoch have been processed.
func (a Actor) EpochTick(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	var st State

	rt.StateReadonly(&st)
	for _, entry := range st.Entries {
		code := rt.Send(entry.Receiver, entry.MethodNum, nil, abi.NewTokenAmount(0), &builtin.Discard{})
		// Any error and return value are ignored.
		if code.IsError() {
			rt.Log(rtt.ERROR, "cron failed to send entry to %s, send error code %d", entry.Receiver, code)
		}
	}

	return nil
}

InitActor

The InitActor has the power to create new actors, e.g., those that enter the system. It maintains a table resolving public-key and temporary actor addresses to their canonical ID-addresses. Invalid CIDs should not get committed to the state tree.

Note that the canonical ID address does not persist in case of chain re-organization. The actor address or public key survives chain re-organization.

package init

import (
	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/exitcode"
	init0 "github.com/filecoin-project/specs-actors/actors/builtin/init"
	cid "github.com/ipfs/go-cid"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
	"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)

// The init actor uniquely has the power to create new actors.
// It maintains a table resolving pubkey and temporary actor addresses to the canonical ID-addresses.
type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.Exec,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.InitActorCodeID
}

func (a Actor) IsSingleton() bool {
	return true
}

func (a Actor) State() cbor.Er { return new(State) }

var _ runtime.VMActor = Actor{}

//type ConstructorParams struct {
//	NetworkName string
//}
type ConstructorParams = init0.ConstructorParams

func (a Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	st, err := ConstructState(adt.AsStore(rt), params.NetworkName)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
	rt.StateCreate(st)
	return nil
}

//type ExecParams struct {
//	CodeCID           cid.Cid `checked:"true"` // invalid CIDs won't get committed to the state tree
//	ConstructorParams []byte
//}
type ExecParams = init0.ExecParams

//type ExecReturn struct {
//	IDAddress     addr.Address // The canonical ID-based address for the actor.
//	RobustAddress addr.Address // A more expensive but re-org-safe address for the newly created actor.
//}
type ExecReturn = init0.ExecReturn

func (a Actor) Exec(rt runtime.Runtime, params *ExecParams) *ExecReturn {
	rt.ValidateImmediateCallerAcceptAny()
	callerCodeCID, ok := rt.GetActorCodeCID(rt.Caller())
	builtin.RequireState(rt, ok, "no code for caller at %s", rt.Caller())
	if !canExec(callerCodeCID, params.CodeCID) {
		rt.Abortf(exitcode.ErrForbidden, "caller type %v cannot exec actor type %v", callerCodeCID, params.CodeCID)
	}

	// Compute a re-org-stable address.
	// This address exists for use by messages coming from outside the system, in order to
	// stably address the newly created actor even if a chain re-org causes it to end up with
	// a different ID.
	uniqueAddress := rt.NewActorAddress()

	// Allocate an ID for this actor.
	// Store mapping of pubkey or actor address to actor ID
	var st State
	var idAddr addr.Address
	rt.StateTransaction(&st, func() {
		var err error
		idAddr, err = st.MapAddressToNewID(adt.AsStore(rt), uniqueAddress)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to allocate ID address")
	})

	// Create an empty actor.
	rt.CreateActor(params.CodeCID, idAddr)

	// Invoke constructor.
	code := rt.Send(idAddr, builtin.MethodConstructor, builtin.CBORBytes(params.ConstructorParams), rt.ValueReceived(), &builtin.Discard{})
	builtin.RequireSuccess(rt, code, "constructor failed")

	return &ExecReturn{IDAddress: idAddr, RobustAddress: uniqueAddress}
}

func canExec(callerCodeID cid.Cid, execCodeID cid.Cid) bool {
	switch execCodeID {
	case builtin.StorageMinerActorCodeID:
		if callerCodeID == builtin.StoragePowerActorCodeID {
			return true
		}
		return false
	case builtin.PaymentChannelActorCodeID, builtin.MultisigActorCodeID:
		return true
	default:
		return false
	}
}

RewardActor

The RewardActor is where unminted Filecoin tokens are kept. The actor distributes rewards directly to miner actors, where they are locked for vesting. The reward value used for the current epoch is updated at the end of an epoch through a cron tick.

package reward

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/exitcode"
	rtt "github.com/filecoin-project/go-state-types/rt"
	reward0 "github.com/filecoin-project/specs-actors/actors/builtin/reward"
	reward6 "github.com/filecoin-project/specs-actors/v6/actors/builtin/reward"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)

// PenaltyMultiplier is the factor by which miner penalties are scaled up
const PenaltyMultiplier = 3

type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.AwardBlockReward,
		3:                         a.ThisEpochReward,
		4:                         a.UpdateNetworkKPI,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.RewardActorCodeID
}

func (a Actor) IsSingleton() bool {
	return true
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

func (a Actor) Constructor(rt runtime.Runtime, currRealizedPower *abi.StoragePower) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	if currRealizedPower == nil {
		rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
		return nil // linter does not understand abort exiting
	}
	st := ConstructState(*currRealizedPower)
	rt.StateCreate(st)
	return nil
}

//type AwardBlockRewardParams struct {
//	Miner     address.Address
//	Penalty   abi.TokenAmount // penalty for including bad messages in a block, >= 0
//	GasReward abi.TokenAmount // gas reward from all gas fees in a block, >= 0
//	WinCount  int64           // number of reward units won, > 0
//}
type AwardBlockRewardParams = reward0.AwardBlockRewardParams

// Awards a reward to a block producer.
// This method is called only by the system actor, implicitly, as the last message in the evaluation of a block.
// The system actor thus computes the parameters and attached value.
//
// The reward includes two components:
// - the epoch block reward, computed and paid from the reward actor's balance,
// - the block gas reward, expected to be transferred to the reward actor with this invocation.
//
// The reward is reduced before the residual is credited to the block producer, by:
// - a penalty amount, provided as a parameter, which is burnt,
func (a Actor) AwardBlockReward(rt runtime.Runtime, params *AwardBlockRewardParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	priorBalance := rt.CurrentBalance()
	if params.Penalty.LessThan(big.Zero()) {
		rt.Abortf(exitcode.ErrIllegalArgument, "negative penalty %v", params.Penalty)
	}
	if params.GasReward.LessThan(big.Zero()) {
		rt.Abortf(exitcode.ErrIllegalArgument, "negative gas reward %v", params.GasReward)
	}
	if priorBalance.LessThan(params.GasReward) {
		rt.Abortf(exitcode.ErrIllegalState, "actor current balance %v insufficient to pay gas reward %v",
			priorBalance, params.GasReward)
	}
	if params.WinCount <= 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid win count %d", params.WinCount)
	}

	minerAddr, ok := rt.ResolveAddress(params.Miner)
	if !ok {
		rt.Abortf(exitcode.ErrNotFound, "failed to resolve given owner address")
	}
	// The miner penalty is scaled up by a factor of PenaltyMultiplier
	penalty := big.Mul(big.NewInt(PenaltyMultiplier), params.Penalty)
	totalReward := big.Zero()
	var st State
	rt.StateTransaction(&st, func() {
		blockReward := big.Mul(st.ThisEpochReward, big.NewInt(params.WinCount))
		blockReward = big.Div(blockReward, big.NewInt(builtin.ExpectedLeadersPerEpoch))
		totalReward = big.Add(blockReward, params.GasReward)
		currBalance := rt.CurrentBalance()
		if totalReward.GreaterThan(currBalance) {
			rt.Log(rtt.WARN, "reward actor balance %d below totalReward expected %d, paying out rest of balance", currBalance, totalReward)
			totalReward = currBalance

			blockReward = big.Sub(totalReward, params.GasReward)
			// Since we have already asserted the balance is greater than gas reward blockReward is >= 0
			builtin.RequireState(rt, blockReward.GreaterThanEqual(big.Zero()), "programming error, block reward %v below zero", blockReward)
		}
		st.TotalStoragePowerReward = big.Add(st.TotalStoragePowerReward, blockReward)
	})

	builtin.RequireState(rt, totalReward.LessThanEqual(priorBalance), "reward %v exceeds balance %v", totalReward, priorBalance)

	// if this fails, we can assume the miner is responsible and avoid failing here.
	rewardParams := builtin.ApplyRewardParams{
		Reward:  totalReward,
		Penalty: penalty,
	}
	code := rt.Send(minerAddr, builtin.MethodsMiner.ApplyRewards, &rewardParams, totalReward, &builtin.Discard{})
	if !code.IsSuccess() {
		rt.Log(rtt.ERROR, "failed to send ApplyRewards call to the miner actor with funds: %v, code: %v", totalReward, code)
		code := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, totalReward, &builtin.Discard{})
		if !code.IsSuccess() {
			rt.Log(rtt.ERROR, "failed to send unsent reward to the burnt funds actor, code: %v", code)
		}
	}

	return nil
}

// Changed since v0:
// - removed ThisEpochReward (unsmoothed)
//type ThisEpochRewardReturn struct {
//	ThisEpochRewardSmoothed smoothing.FilterEstimate
//	ThisEpochBaselinePower  abi.StoragePower
//}
type ThisEpochRewardReturn = reward6.ThisEpochRewardReturn

// The award value used for the current epoch, updated at the end of an epoch
// through cron tick.  In the case previous epochs were null blocks this
// is the reward value as calculated at the last non-null epoch.
func (a Actor) ThisEpochReward(rt runtime.Runtime, _ *abi.EmptyValue) *ThisEpochRewardReturn {
	rt.ValidateImmediateCallerAcceptAny()

	var st State
	rt.StateReadonly(&st)
	return &ThisEpochRewardReturn{
		ThisEpochRewardSmoothed: st.ThisEpochRewardSmoothed,
		ThisEpochBaselinePower:  st.ThisEpochBaselinePower,
	}
}

// Called at the end of each epoch by the power actor (in turn by its cron hook).
// This is only invoked for non-empty tipsets, but catches up any number of null
// epochs to compute the next epoch reward.
func (a Actor) UpdateNetworkKPI(rt runtime.Runtime, currRealizedPower *abi.StoragePower) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
	if currRealizedPower == nil {
		rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
	}

	var st State
	rt.StateTransaction(&st, func() {
		prev := st.Epoch
		// if there were null runs catch up the computation until
		// st.Epoch == rt.CurrEpoch()
		for st.Epoch < rt.CurrEpoch() {
			// Update to next epoch to process null rounds
			st.updateToNextEpoch(*currRealizedPower)
		}

		st.updateToNextEpochWithReward(*currRealizedPower)
		// only update smoothed estimates after updating reward and epoch
		st.updateSmoothedEstimates(st.Epoch - prev)
	})
	return nil
}

AccountActor

The AccountActor is responsible for user accounts. Account actors are not created by the InitActor; their constructor is called by the system. Account actors are created by sending a message to a public-key style address. The address must use the BLS or SECP256K1 protocol, otherwise the constructor aborts with an exit error. The account actor updates the state tree with the new actor address.

package account

import (
	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/exitcode"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)

type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		1: a.Constructor,
		2: a.PubkeyAddress,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.AccountActorCodeID
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

type State struct {
	Address addr.Address
}

func (a Actor) Constructor(rt runtime.Runtime, address *addr.Address) *abi.EmptyValue {
	// Account actors are created implicitly by sending a message to a pubkey-style address.
	// This constructor is not invoked by the InitActor, but by the system.
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	switch address.Protocol() {
	case addr.SECP256K1:
	case addr.BLS:
		break // ok
	default:
		rt.Abortf(exitcode.ErrIllegalArgument, "address must use BLS or SECP protocol, got %v", address.Protocol())
	}
	st := State{Address: *address}
	rt.StateCreate(&st)
	return nil
}

// Fetches the pubkey-type address from this actor.
func (a Actor) PubkeyAddress(rt runtime.Runtime, _ *abi.EmptyValue) *addr.Address {
	rt.ValidateImmediateCallerAcceptAny()
	var st State
	rt.StateReadonly(&st)
	return &st.Address
}

VM Interpreter - Message Invocation (Outside VM)

The VM interpreter orchestrates the execution of messages from a tipset on that tipset’s parent state, producing a new state and a sequence of message receipts. The CIDs of this new state and of the receipt collection are included in blocks from the subsequent epoch, which must agree about those CIDs in order to form a new tipset.

Every state change is driven by the execution of a message. The messages from all the blocks in a tipset must be executed in order to produce the next state. All messages from the first block are executed before those of the second and subsequent blocks in the tipset. For each block, BLS-aggregated messages are executed first, then SECP signed messages.

Implicit messages

In addition to the messages explicitly included in each block, a few state changes at each epoch are made by implicit messages. Implicit messages are not transmitted between nodes, but constructed by the interpreter at evaluation time.

For each block in a tipset, an implicit message:

  • invokes the block producer’s miner actor to process the (already-validated) election PoSt submission, as the first message in the block;
  • invokes the reward actor to pay the block reward to the miner’s owner account, as the final message in the block;

For each tipset, an implicit message:

  • invokes the cron actor to process automated checks and payments, as the final message in the tipset.

All implicit messages are constructed with a From address being the distinguished system account actor. They specify a gas price of zero, but must be included in the computation. They must succeed (have an exit code of zero) in order for the new state to be computed. Receipts for implicit messages are not included in the receipt list; only explicit messages have an explicit receipt.
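
For illustration, the sketch below shows the general shape of the implicit cron invocation built by the interpreter at the end of a tipset. The struct and constants used here (method number, gas limit) are simplified placeholders, not the actual Lotus types or parameters; the field names mirror the Message struct shown later in the Block section.

// Sketch: the implicit cron-tick message constructed by the interpreter.
// All types and constants here are illustrative placeholders.
type implicitMessage struct {
	To, From   string // actor addresses, simplified to strings
	Nonce      uint64
	Value      uint64
	GasFeeCap  uint64 // zero: implicit messages specify a gas price of zero
	GasPremium uint64
	GasLimit   uint64
	Method     uint64
	Params     []byte
}

func buildCronTick(systemActorAddr, cronActorAddr string, systemNonce uint64) implicitMessage {
	return implicitMessage{
		To:         cronActorAddr,
		From:       systemActorAddr, // the distinguished system account actor
		Nonce:      systemNonce,
		Value:      0,
		GasFeeCap:  0,
		GasPremium: 0,
		GasLimit:   1 << 40, // effectively unbounded; placeholder value
		Method:     2,       // placeholder for the cron actor's tick method
		Params:     nil,
	}
}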

Gas payments

In most cases, the sender of a message pays the miner which produced the block including that message a gas fee for its execution.

The gas payments for each message execution are paid to the miner owner account immediately after that message is executed. There are no encumbrances to either the block reward or gas fees earned: both may be spent immediately.

Duplicate messages

Since different miners produce blocks in the same epoch, multiple blocks in a single tipset may include the same message (identified by the same CID). When this happens, the message is processed only the first time it is encountered in the tipset’s canonical order. Subsequent instances of the message are ignored: they do not result in any state mutation, do not produce a receipt, and do not pay gas to the block producer.

The sequence of executions for a tipset is thus summarised (and sketched in code after the list):

  • process election post for first block
  • messages for first block (BLS before SECP)
  • pay reward for first block
  • process election post for second block
  • messages for second block (BLS before SECP, skipping any already encountered)
  • pay reward for second block
  • [... subsequent blocks ...]
  • cron tick
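
The sketch below illustrates this execution order together with the de-duplication rule from the previous section. BlockStub, MsgStub and the apply callbacks are simplified stand-ins, not the interpreter's real types.

// Sketch of tipset execution order with duplicate-message elimination.
// All types and callbacks here are illustrative stand-ins.
type MsgStub struct{ CID string }

type BlockStub struct {
	BLSMessages  []MsgStub
	SECPMessages []MsgStub
}

func applyTipset(blocks []BlockStub, applyImplicit func(kind string), applyMessage func(m MsgStub)) {
	seen := make(map[string]bool) // CIDs of messages already executed in this tipset
	for _, b := range blocks {
		applyImplicit("election-post") // first implicit message of the block
		for _, m := range append(append([]MsgStub{}, b.BLSMessages...), b.SECPMessages...) {
			if seen[m.CID] {
				continue // duplicate: no state change, no receipt, no gas paid
			}
			seen[m.CID] = true
			applyMessage(m)
		}
		applyImplicit("block-reward") // final implicit message of the block
	}
	applyImplicit("cron-tick") // final implicit message of the tipset
}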

Message validity and failure

Every message in a valid block can be processed and produce a receipt (note that block validity implies all messages are syntactically valid – see Message Syntax – and correctly signed). However, execution may or may not succeed, depending on the state to which the message is applied. If the execution of a message fails, the corresponding receipt will carry a non-zero exit code.

If a message fails due to a reason that can reasonably be attributed to the miner including a message that could never have succeeded in the parent state, or because the sender lacks funds to cover the maximum message cost, then the miner pays a penalty by burning the gas fee (rather than the sender paying fees to the block miner).

The only state changes resulting from a message failure are either:

  • incrementing of the sending actor’s CallSeqNum, and payment of gas fees from the sender to the owner of the miner of the block including the message; or
  • a penalty equivalent to the gas fee for the failed message, burnt by the miner (sender’s CallSeqNum unchanged).

A message execution will fail if, in the immediately preceding state:

  • the From actor does not exist in the state (miner penalized),
  • the From actor is not an account actor (miner penalized),
  • the CallSeqNum of the message does not match the CallSeqNum of the From actor (miner penalized),
  • the From actor does not have sufficient balance to cover the sum of the message Value plus the maximum gas cost, GasLimit * GasPrice (miner penalized),
  • the To actor does not exist in state and the To address is not a pubkey-style address,
  • the To actor exists (or is implicitly created as an account) but does not have a method corresponding to the non-zero MethodNum,
  • deserialized Params is not an array of length matching the arity of the To actor’s MethodNum method,
  • deserialized Params are not valid for the types specified by the To actor’s MethodNum method,
  • the invoked method consumes more gas than the GasLimit allows,
  • the invoked method exits with a non-zero code (via Runtime.Abort()), or
  • any inner message sent by the receiver fails for any of the above reasons.

Note that if the To actor does not exist in state and the address is a valid H(pubkey) address, it will be created as an account actor.
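
The sender-side checks listed above, which determine whether a failure is penalised to the miner, can be sketched as follows. ActorStub is a simplified stand-in for the sender's actor state, and the maximum cost mirrors Value + GasLimit * GasPrice.

// Sketch of the pre-execution sender checks; a false return corresponds to a
// failure attributed to (and penalised against) the including miner.
type ActorStub struct {
	Exists     bool
	IsAccount  bool
	CallSeqNum uint64
	Balance    uint64
}

func senderChecksPass(from ActorStub, msgCallSeqNum, value, gasLimit, gasPrice uint64) bool {
	if !from.Exists || !from.IsAccount {
		return false // unknown or non-account sender
	}
	if from.CallSeqNum != msgCallSeqNum {
		return false // CallSeqNum mismatch
	}
	maxCost := value + gasLimit*gasPrice // maximum the sender might have to pay
	return from.Balance >= maxCost
}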

Blockchain

The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol. It is the main interface linking various actors in the Filecoin system.

The Filecoin blockchain system includes:

  • A Message Pool subsystem that nodes use to track and propagate messages that miners have declared they want to include in the blockchain.
  • A Virtual Machine subsystem used to interpret and execute messages in order to update system state.
  • A State Tree subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the VM from a given subchain.
  • A Chain Synchronisation (ChainSync) subsystem that tracks and propagates validated message blocks, maintaining sets of candidate chains on which the miner may mine and running syntactic validation on incoming blocks.
  • A Storage Power Consensus subsystem which tracks storage state (i.e., the Storage Subsystem) for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.

The blockchain system also includes:

  • A Chain Manager, which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
  • A Block Producer which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.

At a high-level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain will earn them block rewards. Filecoin’s blockchain runs on storage power. That is, its consensus algorithm by which miners agree on which subchain to mine is predicated on the amount of storage backing that subchain. At a high-level, the Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.

Blocks

The Block is the main unit of the Filecoin blockchain, as is also the case with most other blockchains. Block messages are directly linked with Tipsets, which are groups of Block messages as detailed later on in this section. In the following we discuss the main structure of a Block message and the process of validating Block messages in the Filecoin blockchain.

Block

The Block is the main unit of the Filecoin blockchain.

The Block structure in the Filecoin blockchain is composed of: i) the Block Header, ii) the list of BLS messages inside the block, and iii) the list of secp256k1 signed messages. This is represented in the FullBlock abstraction. The messages indicate the required set of changes to apply in order to arrive at a deterministic state of the chain.

The Lotus implementation of the block has the following struct:

type FullBlock struct {
	Header        *BlockHeader
	BlsMessages   []*Message
	SecpkMessages []*SignedMessage
}

Note
A block is functionally the same as a block header in the Filecoin protocol. While a block header contains Merkle links to the full system state, messages, and message receipts, a block can be thought of as the full set of this information (not just the Merkle roots, but rather the full data of the state tree, message tree, receipts tree, etc.). Because a full block is large in size, the Filecoin blockchain consists of block headers rather than full blocks. We often use the terms block and block header interchangeably.

A BlockHeader is a canonical representation of a block. BlockHeaders are propagated between miner nodes. From the BlockHeader message, a miner has all the required information to apply the associated FullBlock’s state and update the chain. The minimum set of information that needs to be included in the BlockHeader is shown below and includes, among others: the miner’s address, the Ticket, the Proof of SpaceTime, the CIDs of the parent blocks from which this block evolved in the IPLD DAG, as well as the CIDs of the block’s messages.

The Lotus implementation of the block header has the following structs:

type BlockHeader struct {
	Miner address.Address // 0 unique per block/miner

	Ticket                *Ticket           // 1 unique per block/miner: should be a valid VRF
	ElectionProof         *ElectionProof    // 2 unique per block/miner: should be a valid VRF
	BeaconEntries         []BeaconEntry     // 3 identical for all blocks in same tipset
	WinPoStProof          []proof.PoStProof // 4 unique per block/miner
	Parents               []cid.Cid         // 5 identical for all blocks in same tipset
	ParentWeight          BigInt            // 6 identical for all blocks in same tipset
	Height                abi.ChainEpoch    // 7 identical for all blocks in same tipset
	ParentStateRoot       cid.Cid           // 8 identical for all blocks in same tipset
	ParentMessageReceipts cid.Cid           // 9 identical for all blocks in same tipset
	Messages              cid.Cid           // 10 unique per block
	BLSAggregate          *crypto.Signature // 11 unique per block: aggregate of BLS messages from above
	Timestamp             uint64            // 12 identical for all blocks in same tipset / hard-tied to the value of Height above
	BlockSig              *crypto.Signature // 13 unique per block/miner: miner signature
	ForkSignaling         uint64            // 14 currently unused/undefined
	ParentBaseFee         abi.TokenAmount   // 15 identical for all blocks in same tipset: the base fee after executing parent tipset

	validated bool // internal, true if the signature has been validated
}
type Ticket struct {
	VRFProof []byte
}
type ElectionProof struct {
	WinCount int64
	VRFProof []byte
}
type BeaconEntry struct {
	Round uint64
	Data  []byte
}

The BlockHeader structure has to refer to the TicketWinner of the current round which ensures the correct winner is passed to ChainSync.

func IsTicketWinner(vrfTicket []byte, mypow BigInt, totpow BigInt) bool
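
A function of this shape can be realised by hashing the VRF output and treating it as a fraction of 2^256: the ticket wins when that fraction is below the miner's share of total power. The sketch below is illustrative only; it uses SHA-256 as a stand-in hash and omits the ExpectedLeaders scaling used by Expected Consensus.

package consensus

import (
	"crypto/sha256" // stand-in for the protocol's hash function
	"math/big"
)

// isTicketWinner sketch: win iff H(ticket)/2^256 < myPower/totalPower,
// rearranged to avoid division: H(ticket) * totalPower < myPower * 2^256.
func isTicketWinner(vrfTicket []byte, myPower, totalPower *big.Int) bool {
	h := sha256.Sum256(vrfTicket)
	lhs := new(big.Int).SetBytes(h[:]) // H(ticket) as an integer
	lhs.Mul(lhs, totalPower)
	rhs := new(big.Int).Lsh(new(big.Int).Set(myPower), 256) // myPower * 2^256
	return lhs.Cmp(rhs) < 0
}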

The Message structure has to include the source (From) and destination (To) addresses, a Nonce, and the gas parameters (GasLimit, GasFeeCap and GasPremium).

The Lotus implementation of the message has the following structure:

type Message struct {
	Version uint64

	To   address.Address
	From address.Address

	Nonce uint64

	Value abi.TokenAmount

	GasLimit   int64
	GasFeeCap  abi.TokenAmount
	GasPremium abi.TokenAmount

	Method abi.MethodNum
	Params []byte
}

The message is also validated before it is passed to the chain synchronization logic:

func (m *Message) ValidForBlockInclusion(minGas int64, version network.Version) error {
	if m.Version != 0 {
		return xerrors.New("'Version' unsupported")
	}

	if m.To == address.Undef {
		return xerrors.New("'To' address cannot be empty")
	}

	if m.To == build.ZeroAddress && version >= network.Version7 {
		return xerrors.New("invalid 'To' address")
	}

	if !abi.AddressValidForNetworkVersion(m.To, version) {
		return xerrors.New("'To' address protocol unsupported for network version")
	}

	if m.From == address.Undef {
		return xerrors.New("'From' address cannot be empty")
	}

	if !abi.AddressValidForNetworkVersion(m.From, version) {
		return xerrors.New("'From' address protocol unsupported for network version")
	}

	if m.Value.Int == nil {
		return xerrors.New("'Value' cannot be nil")
	}

	if m.Value.LessThan(big.Zero()) {
		return xerrors.New("'Value' field cannot be negative")
	}

	if m.Value.GreaterThan(TotalFilecoinInt) {
		return xerrors.New("'Value' field cannot be greater than total filecoin supply")
	}

	if m.GasFeeCap.Int == nil {
		return xerrors.New("'GasFeeCap' cannot be nil")
	}

	if m.GasFeeCap.LessThan(big.Zero()) {
		return xerrors.New("'GasFeeCap' field cannot be negative")
	}

	if m.GasPremium.Int == nil {
		return xerrors.New("'GasPremium' cannot be nil")
	}

	if m.GasPremium.LessThan(big.Zero()) {
		return xerrors.New("'GasPremium' field cannot be negative")
	}

	if m.GasPremium.GreaterThan(m.GasFeeCap) {
		return xerrors.New("'GasFeeCap' less than 'GasPremium'")
	}

	if m.GasLimit > build.BlockGasLimit {
		return xerrors.Errorf("'GasLimit' field cannot be greater than a block's gas limit (%d > %d)", m.GasLimit, build.BlockGasLimit)
	}

	if m.GasLimit <= 0 {
		return xerrors.Errorf("'GasLimit' field %d must be positive", m.GasLimit)
	}

	// since prices might vary with time, this is technically semantic validation
	if m.GasLimit < minGas {
		return xerrors.Errorf("'GasLimit' field cannot be less than the cost of storing a message on chain %d < %d", m.GasLimit, minGas)
	}

	return nil
}

Block syntax validation

Syntax validation refers to validation that should be performed on a block and its messages without reference to outside information such as the parent state tree. This type of validation is sometimes called static validation.

An invalid block must not be transmitted or referenced as a parent.

A syntactically valid block header must decode into fields matching the definitions below, must be a valid CBOR PubSub BlockMsg message and must have:

  • between 1 and 5*ec.ExpectedLeaders Parents CIDs if Epoch is greater than zero (else empty Parents),
  • a non-negative ParentWeight,
  • less than or equal to BlockMessageLimit number of messages,
  • aggregate message CIDs, encapsulated in the MsgMeta structure, serialized to the Messages CID in the block header,
  • a Miner address which is an ID-address. The Miner Address in the block header should be present and correspond to a public-key address in the current chain state.
  • Block signature (BlockSig) that belongs to the public-key address retrieved for the Miner
  • a non-negative Epoch,
  • a positive Timestamp,
  • a Ticket with non-empty VRFResult,
  • ElectionPoStOutput containing:
    • a Candidates array with between 1 and EC.ExpectedLeaders values (inclusive),
    • a non-empty PoStRandomness field,
    • a non-empty Proof field,
  • a non-empty ForkSignal field.

A syntactically valid full block must have:

  • all referenced messages syntactically valid,
  • all referenced parent receipts syntactically valid,
  • a total serialized size of the block header plus included messages no greater than block.BlockMaxSize,
  • a total gas limit, summed over all explicit messages, no greater than block.BlockGasLimit.

Note that validation of the block signature requires access to the miner worker address and public key from the parent tipset state, so signature validation forms part of semantic validation. Similarly, message signature validation requires lookup of the public key associated with each message’s From account actor in the block’s parent state.

Block semantic validation

Semantic validation refers to validation that requires reference to information outside the block header and messages themselves. Semantic validation relates to the parent tipset and state on which the block is built.

In order to proceed to semantic validation, the FullBlock must be assembled from the received block header by retrieving its Filecoin messages. Block message CIDs can be retrieved from the network and decoded into valid CBOR Message/SignedMessage structures.

In the Lotus implementation the semantic validation of a block is carried out by the Syncer module:

// ValidateBlock should match up with 'Semantical Validation' in validation.md in the spec
func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock, useCache bool) (err error) {
	defer func() {
		// b.Cid() could panic for empty blocks that are used in tests.
		if rerr := recover(); rerr != nil {
			err = xerrors.Errorf("validate block panic: %w", rerr)
			return
		}
	}()

	if useCache {
		isValidated, err := syncer.store.IsBlockValidated(ctx, b.Cid())
		if err != nil {
			return xerrors.Errorf("check block validation cache %s: %w", b.Cid(), err)
		}

		if isValidated {
			return nil
		}
	}

	validationStart := build.Clock.Now()
	defer func() {
		stats.Record(ctx, metrics.BlockValidationDurationMilliseconds.M(metrics.SinceInMilliseconds(validationStart)))
		log.Infow("block validation", "took", time.Since(validationStart), "height", b.Header.Height, "age", time.Since(time.Unix(int64(b.Header.Timestamp), 0)))
	}()

	ctx, span := trace.StartSpan(ctx, "validateBlock")
	defer span.End()

	if err := syncer.consensus.ValidateBlock(ctx, b); err != nil {
		return err
	}

	if useCache {
		if err := syncer.store.MarkBlockAsValidated(ctx, b.Cid()); err != nil {
			return xerrors.Errorf("caching block validation %s: %w", b.Cid(), err)
		}
	}

	return nil
}

Messages are retrieved through the Syncer, which follows two steps:

  1. Assemble a FullTipSet populated with the single block received earlier. The Block’s ParentWeight is greater than the one from the (first block of the) heaviest tipset.
  2. Retrieve all tipsets from the received Block down to our chain. Validation is expanded to every block inside these tipsets.

The validation should ensure that:

  • Beacon entries are ordered by their round number.
  • The Tipset Parents CIDs match the parent tipset fetched through BlockSync.

A semantically valid block must meet all of the following requirements.

Parents-Related

  • Parents listed in lexicographic order of their header’s Ticket.
  • ParentStateRoot CID of the block matches the state CID computed from the parent Tipset.
  • ParentState matches the state tree produced by executing the parent tipset’s messages (as defined by the VM interpreter) against that tipset’s parent state.
  • ParentMessageReceipts identifying the receipt list produced by parent tipset execution, with one receipt for each unique message from the parent tipset. In other words, the Block’s ParentMessageReceipts CID matches the receipts CID computed from the parent tipset.
  • ParentWeight matches the weight of the chain up to and including the parent tipset.

Time-Related

  • Epoch is greater than that of its Parents, and
    • not in the future according to the node’s local clock reading of the current epoch,
      • blocks with future epochs should not be rejected, but should not be evaluated (validated or included in a tipset) until the appropriate epoch
    • not farther in the past than the soft finality as defined by SPC Finality,
      • this rule only applies when receiving new gossip blocks (i.e. from the current chain head), not when syncing to the chain for the first time.
  • The Timestamp included is in seconds that:
    • must not be bigger than current time plus AllowableClockDriftSecs
    • must not be smaller than previous block’s Timestamp plus BlockDelay (including null blocks)
    • is of the precise value implied by the genesis block’s timestamp, the network’s Block time and the Block’s Epoch.

Miner-Related

  • The Miner is active in the storage power table in the parent tipset state. The Miner’s address is registered in the Claims HAMT of the Power Actor
  • The TipSetState should be included for each tipset being validated.
    • Every Block in the tipset should belong to a different miner.
  • The Actor associated with the message’s From address exists, is an account actor and its Nonce matches the message Nonce.
  • Valid proofs that the Miner proved access to sealed versions of the sectors it was challenged for are included. In order to achieve that:
    • draw randomness for current epoch with WinningPoSt domain separation tag.
    • get list of sectors challenged in this epoch for this miner, based on the randomness drawn.
  • Miner is not slashed in StoragePowerActor.

Beacon- & Ticket-Related

  • Valid BeaconEntries should be included:
    • Check that every one of the BeaconEntries is a signature of a message: previousSignature || round signed using DRAND’s public key.
    • All entries between MaxBeaconRoundForEpoch down to prevEntry (from previous tipset) should be included.
  • A Ticket derived from the minimum ticket from the parent tipset’s block headers,
    • Ticket.VRFResult validly signed by the Miner actor’s worker account public key,
  • ElectionProof Ticket is computed correctly by checking BLS signature using miner’s key. The ElectionProof ticket should be a winning ticket.

Message- & Signature-Related

  • secp256k1 messages are correctly signed by their sending actor’s (From) worker account key,
  • A BLSAggregate signature is included that signs the array of CIDs of all the BLS messages referenced by the block with their sending actor’s key.
  • A valid Signature over the block header’s fields from the block’s Miner actor’s worker account public key is included.
  • For each message in ValidForBlockInclusion() the following hold:
    • Message fields Version, To, From, Value, GasPrice, and GasLimit are correctly defined.
    • Message GasLimit is not below the message minimum gas cost (derived from chain height and message length).
  • For each message in ApplyMessage (that is before a message is executed), the following hold:
    • Basic gas and value checks in checkMessage():
      • The Message GasLimit is bigger than zero.
      • The Message GasPrice and Value are set.
    • The Message’s storage gas cost is under the message’s GasLimit.
    • The Message’s Nonce matches the nonce in the Actor retrieved from the message’s From address.
    • The Message’s maximum gas cost (derived from its GasLimit, GasPrice, and Value) is under the balance of the Actor retrieved from message’s From address.
    • The Message’s transfer Value is under the balance of the Actor retrieved from message’s From address.

There is no semantic validation of the messages included in a block beyond validation of their signatures. If all messages included in a block are syntactically valid then they may be executed and produce a receipt.

A chain sync system may perform syntactic and semantic validation in stages in order to minimize unnecessary resource expenditure.

If all of the above tests are successful, the block is marked as validated. Ultimately, an invalid block must not be propagated further or used as a parent.

Tipset

Expected Consensus probabilistically elects multiple leaders in each epoch meaning a Filecoin chain may contain zero or multiple blocks at each epoch (one per elected miner). Blocks from the same epoch are assembled into tipsets. The VM Interpreter modifies the Filecoin state tree by executing all messages in a tipset (after de-duplication of identical messages included in more than one block).

Each block references a parent tipset and validates that tipset’s state, while proposing messages to be included for the current epoch. The state to which a new block’s messages apply cannot be known until that block is incorporated into a tipset. It is thus not meaningful to execute the messages from a single block in isolation: a new state tree is only known once all messages in that block’s tipset are executed.

A valid tipset contains a non-empty collection of blocks that have distinct miners and all specify identical:

  • Epoch
  • Parents
  • ParentWeight
  • StateRoot
  • ReceiptsRoot

The blocks in a tipset are canonically ordered by the lexicographic ordering of the bytes in each block’s ticket, breaking ties with the bytes of the CID of the block itself.

Due to network propagation delay, it is possible for a miner in epoch N+1 to omit valid blocks mined at epoch N from their parent tipset. This does not make the newly generated block invalid, it does however reduce its weight and chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.

Block producers are expected to coordinate how they select messages for inclusion in blocks in order to avoid duplicates and thus maximize their expected earnings from message fees (see Message Pool).

The main Tipset structure in the Lotus implementation includes the following:

type TipSet struct {
	cids   []cid.Cid
	blks   []*BlockHeader
	height abi.ChainEpoch
}

Semantic validation of a Tipset includes the following checks:

  • A tipset is composed of at least one block. (Because of our variable number of blocks per tipset, determined by randomness, we do not impose an upper limit.)
  • All blocks have the same height.
  • All blocks have the same parents (same number of them and matching CIDs).

func NewTipSet(blks []*BlockHeader) (*TipSet, error) {
	if len(blks) == 0 {
		return nil, xerrors.Errorf("NewTipSet called with zero length array of blocks")
	}

	sort.Slice(blks, tipsetSortFunc(blks))

	var ts TipSet
	ts.cids = []cid.Cid{blks[0].Cid()}
	ts.blks = blks
	for _, b := range blks[1:] {
		if b.Height != blks[0].Height {
			return nil, fmt.Errorf("cannot create tipset with mismatching heights")
		}

		if len(blks[0].Parents) != len(b.Parents) {
			return nil, fmt.Errorf("cannot create tipset with mismatching number of parents")
		}

		for i, cid := range b.Parents {
			if cid != blks[0].Parents[i] {
				return nil, fmt.Errorf("cannot create tipset with mismatching parents")
			}
		}

		ts.cids = append(ts.cids, b.Cid())

	}
	ts.height = blks[0].Height

	return &ts, nil
}
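
The tipsetSortFunc referenced above realises the canonical ordering described earlier: blocks are sorted by the bytes of their ticket, with ties broken by the bytes of the block CID. A hedged sketch of such a comparator follows (it assumes the bytes package is imported and is not the exact Lotus code):

// Sketch of the tipset block comparator: order by the ticket's VRF bytes,
// breaking ties with the bytes of the block CID.
func tipsetLess(a, b *BlockHeader) bool {
	if c := bytes.Compare(a.Ticket.VRFProof, b.Ticket.VRFProof); c != 0 {
		return c < 0
	}
	return bytes.Compare(a.Cid().Bytes(), b.Cid().Bytes()) < 0
}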

Chain Manager

The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.

In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for use by those systems, enabling systems to sample randomness from the chain for instance, or to see which block has been finalized most recently.

Chain Extension
Incoming block reception

For every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:

  • it is able to add it to the current heaviest subchain, through the reception of another block in that subchain, or
  • it is able to discard it, as the block was mined before finality.

It is important to note that ahead of finality, a given subchain may be abandoned for another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.

Chain selection is a crucial component of how the Filecoin blockchain works. In brief, every chain has an associated weight accounting for the number of blocks mined on it and so the power (storage) they track. The full details of how selection works are provided in the Chain Selection section.

Notes/Recommendations:

  1. In order to make certain validation checks simpler, blocks should be indexed by height and by parent set. That way sets of blocks with a given height and common parents may be quickly queried (see the sketch below).
  2. It may also be useful to compute and cache the resultant aggregate state of blocks in these sets; this saves extra state computation when checking which state root to start a block at when it has multiple parents.
  3. It is recommended that blocks are kept in the local datastore regardless of whether they are understood as the best tip at this point - this is to avoid having to refetch the same blocks in the future.
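
A hedged sketch of recommendation 1, keying an index by height plus the canonically ordered parent set (types follow the BlockHeader shown earlier; the fmt, abi and cid imports are assumed):

// Sketch: index blocks by (height, parent set) so sibling blocks that share
// the same parents can be queried together. Illustrative only.
type blockIndex struct {
	byHeightAndParents map[string][]*BlockHeader
}

func indexKey(height abi.ChainEpoch, parents []cid.Cid) string {
	key := fmt.Sprintf("%d", height)
	for _, p := range parents { // parents are already canonically ordered
		key += "/" + p.String()
	}
	return key
}

func (ix *blockIndex) Add(b *BlockHeader) {
	k := indexKey(b.Height, b.Parents)
	ix.byHeightAndParents[k] = append(ix.byHeightAndParents[k], b)
}

func (ix *blockIndex) Siblings(height abi.ChainEpoch, parents []cid.Cid) []*BlockHeader {
	return ix.byHeightAndParents[indexKey(height, parents)]
}
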
ChainTipsManager

The Chain Tips Manager is a subcomponent of Filecoin consensus that is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.

// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) Ticket

// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) Tipset

// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset() Tipset

// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)

Block Producer

Mining Blocks

A miner registered with the storage power actor may begin generating and checking election tickets if it has proven storage that meets the Minimum Miner Size threshold requirement.

In order to do so, the miner must be running chain validation, and be keeping track of the most recent blocks received. A miner’s new block will be based on parents from the previous epoch.

Block Creation

Producing a block for epoch H requires waiting for the beacon entry for that epoch and using it to run GenerateElectionProof. If WinCount ≥ 1 (i.e., the miner is elected), the same beacon entry is used to run WinningPoSt. Armed with the ElectionProof ticket (the output of GenerateElectionProof) and the WinningPoSt proof, the miner can produce a new block.

See VM Interpreter for details of parent tipset evaluation, and Block for constraints on valid block header values.

To create a block, the eligible miner must compute a few fields:

  • Parents - the CIDs of the parent tipset’s blocks.
  • ParentWeight - the parent chain’s weight (see Chain Selection).
  • ParentState - the CID of the state root from the parent tipset state evaluation (see the VM Interpreter).
  • ParentMessageReceipts - the CID of the root of an AMT containing receipts produced while computing ParentState.
  • Epoch - the block’s epoch, derived from the Parents epoch and the number of epochs it took to generate this block.
  • Timestamp - a Unix timestamp, in seconds, generated at block creation.
  • BeaconEntries - a set of drand entries generated since the last block (see Beacon Entries).
  • Ticket - a new ticket generated from that in the prior epoch (see Ticket Generation).
  • Miner - the block producer’s miner actor address.
  • Messages - The CID of a TxMeta object containing messages proposed for inclusion in the new block:
    • Select a set of messages from the mempool to include in the block, satisfying block size and gas limits
    • Separate the messages into BLS signed messages and secpk signed messages
    • TxMeta.BLSMessages: The CID of the root of an AMT comprising the bare UnsignedMessages
    • TxMeta.SECPMessages: the CID of the root of an AMT comprising the SignedMessages
  • BLSAggregate - The aggregated signature of all messages in the block that used BLS signing.
  • Signature - A signature with the miner’s worker account private key (must also match the ticket signature) over the block header’s serialized representation (with empty signature).
  • ForkSignaling - A uint64 flag used as part of signaling forks. Should be set to 0 by default.

Note that the messages to be included in a block need not be evaluated in order to produce a valid block. A miner may wish to speculatively evaluate the messages anyway in order to optimize for including messages which will succeed in execution and pay the most gas.
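
For the message-separation step above, the selected mempool messages can be split by signature type before computing the TxMeta roots, roughly as in this sketch (illustrative; it assumes the Lotus SignedMessage type and the go-state-types crypto package):

// Sketch: separate selected messages into bare BLS messages and secp256k1
// signed messages. BLS signatures are later folded into BLSAggregate.
func splitMessages(selected []*SignedMessage) (bls []*Message, secp []*SignedMessage) {
	for _, sm := range selected {
		if sm.Signature.Type == crypto.SigTypeBLS {
			msg := sm.Message
			bls = append(bls, &msg) // store the unsigned message only
		} else {
			secp = append(secp, sm)
		}
	}
	return bls, secp
}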

The block reward is not evaluated when producing a block. It is paid when the block is included in a tipset in the following epoch.

The block’s signature ensures integrity of the block after propagation, since unlike many PoW blockchains, a winning ticket is found independently of block generation.

Block Broadcast

An eligible miner propagates the completed block to the network using the GossipSub /fil/blocks topic and, assuming everything was done correctly, the network will accept it and other miners will mine on top of it, earning the miner a block reward.

Miners should output their valid block as soon as it is produced; otherwise they risk other miners receiving the block after the EPOCH_CUTOFF and not including it in the current epoch.

Block Rewards

Block rewards are handled by the Reward Actor. Further details on the Block Reward are discussed in the Filecoin Token section and details about the Block Reward Collateral are discussed in the Miner Collaterals section.

Message Pool

The Message Pool (mpool, or mempool) is the pool of messages in the Filecoin protocol. It acts as the interface between Filecoin nodes and the peer-to-peer network of other nodes used for off-chain message propagation. The message pool is used by nodes to maintain a set of messages they want to transmit to the Filecoin VM and add to the chain (i.e., add for “on-chain” execution).

In order for a message to end up in the blockchain it first has to be in the message pool. In reality, at least in the Lotus implementation of Filecoin, there is no central pool of messages stored somewhere. Instead, the message pool is an abstraction and is realised as a list of messages kept by every node in the network. Therefore, when a node puts a new message in the message pool, this message is propagated to the rest of the network using libp2p’s pubsub protocol, GossipSub. Nodes need to subscribe to the corresponding pubsub topic in order to receive messages.

Message propagation using GossipSub does not happen immediately and therefore, there is some lag before message pools at different nodes can be in sync. In practice, and given continuous streams of messages being added to the message pool and the delay to propagate messages, the message pool is never synchronised across all nodes in the network. This is not a deficiency of the system, as the message pool does not need to be synchronized across the network.

The message pool should have a maximum size defined to avoid DoS attacks, where nodes are spammed and run out of memory. The recommended size for the message pool is 5000 messages.

Message Propagation

The message pool has to interface with the libp2p pubsub GossipSub protocol, because messages are propagated over GossipSub on the corresponding /fil/msgs/ topic. Every Message is announced in the corresponding /fil/msgs/ topic by any node participating in the network.

There are two main pubsub topics related to messages and blocks: i) the /fil/msgs/ topic that carries messages and, ii) the /fil/blocks/ topic that carries blocks. The /fil/msgs/ topic is linked to the mpool. The process is as follows:

  1. When a client wants to send a message in the Filecoin network, they publish the message to the /fil/msgs/ topic.
  2. The message propagates to all other nodes in the network using GossipSub and eventually ends up in the mpool of all miners.
  3. Depending on cryptoeconomic rules, some miner will eventually pick the message from the mpool (together with other messages) and include it in a block.
  4. The miner publishes the newly-mined block in the /fil/blocks/ pubsub topic and the block propagates to all nodes in the network (including the nodes that published the messages included in this block).

Nodes must check that incoming messages are valid, that is, that they have a valid signature. If the message is not valid it should be dropped and must not be forwarded.
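
With go-libp2p-pubsub, joining the messages topic, publishing a serialized message, and refusing to forward invalid ones looks roughly like the sketch below; the topic name, validation callback and imports (context, the go-libp2p core host and peer packages, and go-libp2p-pubsub) are assumptions of this illustration, not normative names.

// Sketch: publish a signed message on the messages topic and register a
// validator so invalid messages are dropped and not forwarded. Illustrative only.
func publishMessage(ctx context.Context, h host.Host, topicName string, data []byte,
	validate func([]byte) bool) error {
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		return err
	}
	// Messages failing validation are not propagated to other peers.
	err = ps.RegisterTopicValidator(topicName, func(_ context.Context, _ peer.ID, m *pubsub.Message) bool {
		return validate(m.GetData())
	})
	if err != nil {
		return err
	}
	topic, err := ps.Join(topicName)
	if err != nil {
		return err
	}
	return topic.Publish(ctx, data)
}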

The updated, hardened version of the GossipSub protocol includes a number of attack mitigation strategies. For instance, when a node receives an invalid message it assigns a negative score to the sender peer. Peer scores are not shared with other nodes, but are rather kept locally by every peer for all other peers it is interacting with. If a peer’s score drops below a threshold it is excluded from the scoring peer’s mesh. We discuss more details on these settings in the GossipSub section. The full details can be found in the GossipSub Specification.

NOTES:

  • Fund Checking: It is important to note that the mpool logic is not checking whether there are enough funds in the account of the message issuer. This is checked by the miner before including a message in a block.
  • Message Sorting: Messages are sorted in the mpool of miners as they arrive according to cryptoeconomic rules followed by the miner and in order for the miner to compose the next block.

Message Storage

As mentioned earlier, there is no central pool where messages are included. Instead, every node must have allocated memory for incoming messages.

ChainSync

Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and messages, and thus is in charge of distributed state replication. As such, this process is security critical – problems with state replication can have severe consequences to the operation of a blockchain.

When a node first joins the network it discovers peers (through the peer discovery discussed above) and joins the /fil/blocks and /fil/msgs GossipSub topics. It listens to new blocks being propagated by other nodes. It picks one block as the BestTargetHead and starts syncing the blockchain up to this height from the TrustedCheckpoint, which by default is the GenesisBlock or GenesisCheckpoint. In order to pick the BestTargetHead the peer compares a combination of height and weight - the higher these values, the higher the chance that the block is on the main chain. If there are two blocks at the same height, the peer should choose the one with the higher weight. Once the peer chooses the BestTargetHead it uses the BlockSync protocol to fetch the blocks and get to the current height. From that point on it is in CHAIN_FOLLOW mode, where it uses GossipSub to receive new blocks, or Bitswap if it hears about a block that it has not received through GossipSub.

ChainSync Overview

ChainSync is the protocol Filecoin uses to synchronize its blockchain. It is specific to Filecoin’s choices in state representation and consensus rules, but is general enough that it can serve other blockchains. ChainSync is a group of smaller protocols, which handle different parts of the sync process.

Chain synchronisation is generally needed in the following cases:

  1. when a node first joins the network and needs to get to the current state before validating or extending the chain.
  2. when a node has fallen out of sync, e.g., due to a brief disconnection.
  3. during normal operation in order to keep up with the latest messages and blocks.

There are three main protocols used to achieve synchronisation for these three cases.

  • GossipSub is the libp2p pubsub protocol used to propagate messages and blocks. It is mainly used in the third process above when a node needs to stay in sync with new blocks being produced and propagated.
  • BlockSync is used to synchronise specific parts of the chain, that is from and to a specific height.
  • hello protocol, which is used when two peers first “meet” (i.e., first time they connect to each other). According to the protocol, they exchange their chain heads.

In addition, Bitswap is used to request and receive blocks when a node is synchronized (“caught up”) but GossipSub has failed to deliver some blocks to it. Finally, GraphSync can be used to fetch parts of the blockchain as a more efficient version of Bitswap.

Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve the results. That said, nodes MUST implement the version of ChainSync as described in this spec in order to be considered implementations of Filecoin.

Terms and Concepts

  • LastCheckpoint: the last hard social-consensus oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
  • TargetHeads: a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are “target” heads because ChainSync will try to sync to them. This list is sorted by “likelihood of being the best chain”. At this point this is simply realized through ChainWeight.
  • BestTargetHead: the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads.

ChainSync State Machine

At a high level, ChainSync does the following:

  • Part 1: Verify internal state (INIT state below)
    • SHOULD verify data structures and validate local chain
    • Resource expensive verification MAY be skipped at nodes’ own risk
  • Part 2: Bootstrap to the network (BOOTSTRAP)
    • Step 1. Bootstrap to the network, and acquire a “secure enough” set of peers (more details below)
    • Step 2. Bootstrap to the GossipSub channels
  • Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
    • Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint). The TrustedCheckpoint SHOULD NOT be verified in software, it SHOULD be verified by operators.
    • Step 2. Get the block it points to, and that block’s parents
    • Step 3. Fetch the StateTree
  • Part 4: Catch up to the chain (CHAIN_CATCHUP)
    • Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
    • Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
    • Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate.
    • Step 4. Finish when node has “caught up” with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc).
  • Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
    • Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
    • Step 2. Receive, validate, and propagate received Blocks
    • Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state.

ChainSync uses the following conceptual state machine. Since this is a conceptual state machine, implementations MAY deviate from implementing precisely these states, or dividing them strictly. Implementations MAY blur the lines between the states. If so, implementations MUST ensure security of the altered protocol.
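
As a sketch only (implementations MAY structure this differently, as noted above), the conceptual states and the fallback transition can be modelled like this:

// Sketch of the conceptual ChainSync states described above.
type SyncState int

const (
	StateInit           SyncState = iota // verify internal state
	StateBootstrap                       // acquire peers, join GossipSub channels
	StateSyncCheckpoint                  // fetch the TrustedCheckpoint block and state
	StateChainCatchup                    // validate towards BestTargetHead
	StateChainFollow                     // stay in sync and propagate blocks
)

// If security conditions change while following the chain, fall back to catch-up.
func onSecurityConditionChange(s SyncState) SyncState {
	if s == StateChainFollow {
		return StateChainCatchup
	}
	return s
}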

(Figure: ChainSync state machine)

Peer Discovery

Peer discovery is a critical part of the overall architecture. Getting this wrong can have severe consequences for the operation of the protocol. The set of peers a new node initially connects to when joining the network may completely dominate the node’s awareness of other peers, and therefore the view of the state of the network that the node has.

Peer discovery can be driven by arbitrary external means and is pushed outside the core functionality of the protocols involved in ChainSync (i.e., GossipSub, Bitswap, BlockSync). This allows for orthogonal, application-driven development and no external dependencies for the protocol implementation. Nonetheless, the GossipSub protocol supports: i) Peer Exchange, and ii) Explicit Peering Agreements.

Peer Exchange

Peer Exchange allows applications to bootstrap from a known set of peers without an external peer discovery mechanism. This process can be realized either through bootstrap nodes or other normal peers. Bootstrap nodes must be maintained by system operators and must be configured correctly. They have to be stable and operate independently of protocol constructions, such as the GossipSub mesh construction, that is, bootstrap nodes do not maintain connections to the mesh.

For more details on Peer Exchange please refer to the GossipSub specification.

Explicit Peering Agreements

With explicit peering agreements, the operators must specify a list of peers which nodes should connect to when joining. The protocol must have options available for these to be specified. For every explicit peer, the router must establish and maintain a bidirectional (reciprocal) connection.

Progressive Block Validation

  • Blocks may be validated in progressive stages, in order to minimize resource expenditure.

  • Validation computation is considerable, and a serious DOS attack vector.

  • Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.

  • ChainSync SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by FinalityTipset, or when ChainSync is under significant resource load.

  • These stages can be used partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.

  • Progressive Stages of Block Validation

    • BV0 - Syntax: Serialization, typing, value ranges.
    • BV1 - Plausible Consensus: Plausible miner, weight, and epoch values (e.g. from chain state at b.ChainEpoch - consensus.LookbackParameter).
    • BV2 - Block Signature
    • BV3 - Beacon entries: Valid random beacon entries have been inserted in the block (see beacon entry validation).
    • BV4 - ElectionProof: A valid election proof was generated.
    • BV5 - WinningPoSt: Correct PoSt generated.
    • BV6 - Chain ancestry and finality: Verify block links back to trusted chain, not prior to finality.
    • BV7 - Message Signatures: Signatures of included messages (secp256k1 signatures and the BLS aggregate) are valid.
    • BV8 - State tree: Parent tipset message execution produces the claimed state tree root and receipts.

Storage Power Consensus

The Storage Power Consensus (SPC) subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. Storage Power Consensus accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.

Succinctly, the SPC subsystem offers the following services:

Distinguishing between storage miners and block miners

There are two ways to earn Filecoin tokens in the Filecoin network:

  • By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
  • By mining new blocks, extending the blockchain, securing the Filecoin consensus mechanism, and running smart contracts to perform state updates as a Storage Miner.

There are two types of “miners” (storage and block miners) to be distinguished. Leader Election in Filecoin is predicated on a miner’s storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.

However, given Filecoin’s “useful Proof-of-Work” is achieved through file storage ( PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.

On Power

Quality-adjusted power is assigned to every sector as a static function of its Sector Quality which includes: i) the Sector Spacetime, which is the product of the sector size and the promised storage duration, ii) the Deal Weight that converts spacetime occupied by deals into consensus power, iii) the Deal Quality Multiplier that depends on the type of deal done over the sector (i.e., CC, Regular Deal or Verified Client Deal), and finally, iv) the Sector Quality Multiplier, which is an average of deal quality multipliers weighted by the amount of spacetime each type of deal occupies in the sector.

The Sector Quality is a measure that maps size, duration and the type of active deals in a sector during its lifetime to its impact on power and reward distribution.

The quality of a sector depends on the deals made over the data inside the sector. There are generally three types of deals: the Committed Capacity (CC), where there is effectively no deal and the miner is storing arbitrary data inside the sector, the Regular Deals, where a miner and a client agree on a price in the market and the Verified Client deals, which give more power to the sector. We refer the reader to the Sector and Sector Quality sections for details on Sector Types and Sector Quality, the Verified Clients section for more details on what a verified client is, and the CryptoEconomics section for specific parameter values on the Deal Weights and Quality Multipliers.

Quality-Adjusted Power is the number of votes a miner has in the Secret Leader Election and has been defined to increase linearly with the useful storage that a miner has committed to the network.

More precisely, we have the following definitions (a small worked example follows the list):

  • Raw-byte power: the size of a sector in bytes.
  • Quality-adjusted power: the consensus power of stored data on the network, equal to Raw-byte power multiplied by the Sector Quality Multiplier.
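
As a hedged worked example (the multiplier constants below are illustrative placeholders rather than the protocol's normative parameters, and deals are assumed to span the whole sector lifetime so spacetime weights reduce to byte weights):

// Sketch: quality-adjusted power = raw-byte power * sector quality multiplier,
// where the multiplier is the space-weighted average of deal quality multipliers.
func qualityAdjustedPower(rawBytes, ccBytes, dealBytes, verifiedBytes float64) float64 {
	const (
		ccMultiplier       = 1.0  // committed capacity (placeholder)
		dealMultiplier     = 1.0  // regular deals (placeholder)
		verifiedMultiplier = 10.0 // verified client deals (placeholder)
	)
	sectorQuality := (ccBytes*ccMultiplier + dealBytes*dealMultiplier + verifiedBytes*verifiedMultiplier) / rawBytes
	return rawBytes * sectorQuality
}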

Beacon Entries

The Filecoin protocol uses randomness produced by a drand beacon as a source of unbiasable randomness for use in the chain (see randomness).

In turn these random seeds are used by:

  • The sector_sealer as SealSeeds to bind sector commitments to a given subchain.
  • The post_generator as PoStChallenges to prove sectors remain committed as of a given block.
  • The Storage Power subsystem as randomness in Secret Leader Election to determine how often a miner is chosen to mine a new block.

This randomness may be drawn from various Filecoin chain epochs by the respective protocols that use them according to their security requirements.

It is important to note that a given Filecoin network and a given drand network need not have the same round time, i.e. blocks may be generated faster or slower by Filecoin than randomness is generated by drand. For instance, if the drand beacon is producing randomness twice as fast as Filecoin produces blocks, we might expect two random values to be produced in a Filecoin epoch; conversely, if the Filecoin network is twice as fast as drand, we might expect a random value every other Filecoin epoch. Accordingly, depending on both networks' configurations, certain Filecoin blocks could contain multiple or no drand entries. Furthermore, any call to the drand network for a new randomness entry during an outage should be blocking, as noted with the drand.Public() calls below. In all cases, Filecoin blocks must include all drand beacon outputs generated since the last epoch in the BeaconEntries field of the block header. Any use of randomness from a given Filecoin epoch should use the last valid drand entry included in a Filecoin block. This is shown below.

Get drand randomness for VM

For operations such as PoRep creation, proof validations, or anything that requires randomness for the Filecoin VM, there should be a method that extracts the drand entry from the chain correctly. Note that the round may span multiple Filecoin epochs if drand is slower; the lowest epoch number block will contain the requested beacon entry. Similarly, if there have been null rounds where the beacon should have been inserted, we need to iterate on the chain to find where the entry is inserted. Specifically, the next non-null block must contain the drand entry requested, by definition.
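
A hedged sketch of that lookup: starting from the requested epoch, scan forward to the first non-null block and take its last beacon entry. All types here are simplified stand-ins for a real chain store.

// Sketch: find the drand entry to use for a requested epoch. If the epoch was a
// null round, the next non-null block carries the entry, as described above.
type beaconEntryStub struct {
	Round uint64
	Data  []byte
}

type chainBlockStub struct {
	BeaconEntries []beaconEntryStub
}

type chainStub interface {
	BlockAt(epoch int64) *chainBlockStub // nil for a null round
	Head() int64
}

func beaconEntryForEpoch(c chainStub, epoch int64) (beaconEntryStub, bool) {
	for e := epoch; e <= c.Head(); e++ {
		if b := c.BlockAt(e); b != nil && len(b.BeaconEntries) > 0 {
			// the last entry of the first non-null block covers the requested epoch
			return b.BeaconEntries[len(b.BeaconEntries)-1], true
		}
	}
	return beaconEntryStub{}, false
}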

Fetch randomness from drand network

When mining, a miner can fetch entries from the drand network to include them in the new block.

DrandBeacon connects Lotus with a drand network in order to provide randomness to the system in a way that’s aligned with Filecoin rounds/epochs.

We connect to drand peers via their public HTTP endpoints. The peers are enumerated in the drandServers variable.

The root trust for the Drand chain is configured from build.DrandChain.

type DrandBeacon struct {
	client dclient.Client

	pubkey kyber.Point

	// seconds
	interval time.Duration

	drandGenTime uint64
	filGenTime   uint64
	filRoundTime uint64

	localCache *lru.Cache[uint64, *types.BeaconEntry]
}
func BeaconEntriesForBlock(ctx context.Context, bSchedule Schedule, nv network.Version, epoch abi.ChainEpoch, parentEpoch abi.ChainEpoch, prev types.BeaconEntry) ([]types.BeaconEntry, error) {
	{
		parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
		currBeacon := bSchedule.BeaconForEpoch(epoch)
		if parentBeacon != currBeacon {
			// Fork logic
			round := currBeacon.MaxBeaconRoundForEpoch(nv, epoch)
			out := make([]types.BeaconEntry, 2)
			rch := currBeacon.Entry(ctx, round-1)
			res := <-rch
			if res.Err != nil {
				return nil, xerrors.Errorf("getting entry %d returned error: %w", round-1, res.Err)
			}
			out[0] = res.Entry
			rch = currBeacon.Entry(ctx, round)
			res = <-rch
			if res.Err != nil {
				return nil, xerrors.Errorf("getting entry %d returned error: %w", round, res.Err)
			}
			out[1] = res.Entry
			return out, nil
		}
	}

	beacon := bSchedule.BeaconForEpoch(epoch)

	start := build.Clock.Now()

	maxRound := beacon.MaxBeaconRoundForEpoch(nv, epoch)
	if maxRound == prev.Round {
		return nil, nil
	}

	// TODO: this is a sketchy way to handle the genesis block not having a beacon entry
	if prev.Round == 0 {
		prev.Round = maxRound - 1
	}

	cur := maxRound
	var out []types.BeaconEntry
	for cur > prev.Round {
		rch := beacon.Entry(ctx, cur)
		select {
		case resp := <-rch:
			if resp.Err != nil {
				return nil, xerrors.Errorf("beacon entry request returned error: %w", resp.Err)
			}

			out = append(out, resp.Entry)
			cur = resp.Entry.Round - 1
		case <-ctx.Done():
			return nil, xerrors.Errorf("context timed out waiting on beacon entry to come back for epoch %d: %w", epoch, ctx.Err())
		}
	}

	log.Debugw("fetching beacon entries", "took", build.Clock.Since(start), "numEntries", len(out))
	reverse(out)
	return out, nil
}
func (db *DrandBeacon) MaxBeaconRoundForEpoch(nv network.Version, filEpoch abi.ChainEpoch) uint64 {
	// TODO: sometimes the genesis time for filecoin is zero and this goes negative
	latestTs := ((uint64(filEpoch) * db.filRoundTime) + db.filGenTime) - db.filRoundTime

	if nv <= network.Version15 {
		return db.maxBeaconRoundV1(latestTs)
	}

	return db.maxBeaconRoundV2(latestTs)
}

Validating Beacon Entries on block reception

A Filecoin chain will contain the entirety of the beacon’s output from the Filecoin genesis to the current block.

Given their role in leader election and other critical protocols in Filecoin, a block’s beacon entries must be validated for every block. See drand for details. This can be done by ensuring every beacon entry is a valid signature over the prior one in the chain, using drand’s Verify endpoint as follows:

func ValidateBlockValues(bSchedule Schedule, nv network.Version, h *types.BlockHeader, parentEpoch abi.ChainEpoch,
	prevEntry types.BeaconEntry) error {
	{
		parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
		currBeacon := bSchedule.BeaconForEpoch(h.Height)
		if parentBeacon != currBeacon {
			if len(h.BeaconEntries) != 2 {
				return xerrors.Errorf("expected two beacon entries at beacon fork, got %d", len(h.BeaconEntries))
			}
			err := currBeacon.VerifyEntry(h.BeaconEntries[1], h.BeaconEntries[0])
			if err != nil {
				return xerrors.Errorf("beacon at fork point invalid: (%v, %v): %w",
					h.BeaconEntries[1], h.BeaconEntries[0], err)
			}
			return nil
		}
	}

	// TODO: fork logic
	b := bSchedule.BeaconForEpoch(h.Height)
	maxRound := b.MaxBeaconRoundForEpoch(nv, h.Height)
	if maxRound == prevEntry.Round {
		if len(h.BeaconEntries) != 0 {
			return xerrors.Errorf("expected not to have any beacon entries in this block, got %d", len(h.BeaconEntries))
		}
		return nil
	}

	if len(h.BeaconEntries) == 0 {
		return xerrors.Errorf("expected to have beacon entries in this block, but didn't find any")
	}

	last := h.BeaconEntries[len(h.BeaconEntries)-1]
	if last.Round != maxRound {
		return xerrors.Errorf("expected final beacon entry in block to be at round %d, got %d", maxRound, last.Round)
	}

	for i, e := range h.BeaconEntries {
		if err := b.VerifyEntry(e, prevEntry); err != nil {
			return xerrors.Errorf("beacon entry %d (%d - %x (%d)) was invalid: %w", i, e.Round, e.Data, len(e.Data), err)
		}
		prevEntry = e
	}

	return nil
}

Tickets

Filecoin block headers also contain a single “ticket”, generated from the beacon entry of the block’s epoch. Tickets are used to break ties in the Fork Choice Rule between forks of equal weight.

Whenever tickets are compared in Filecoin, the comparison is between the bytes of the tickets’ VRF digests.

Randomness Ticket generation

At a Filecoin epoch n, a new ticket is generated using the appropriate beacon entry for epoch n.

The miner runs the beacon entry through a Verifiable Random Function (VRF) to get a new unique ticket. The beacon entry is prepended with the ticket domain separation tag and concatenated with the miner actor address (to ensure miners using the same worker keys get different tickets).

To generate a ticket for a given epoch n:

randSeed = GetRandomnessFromBeacon(n)
newTicketRandomness = VRF_miner(H(TicketProdDST || index || Serialization(randSeed, minerActorAddress)))

Verifiable Random Functions are used for ticket generation.
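As an illustration only, the sketch below constructs the input to the miner's VRF for ticket production. The domain separation tag value is a stand-in, sha256 stands in for H, and the VRF signing step itself (performed with the miner's worker key) is omitted.

package ticketexample

import (
	"crypto/sha256"
	"encoding/binary"
)

// domainSeparationTagTicketProduction is a stand-in for the real ticket DST value.
const domainSeparationTagTicketProduction uint64 = 1

// ticketVRFInput assembles H(TicketProdDST || beaconRandomness || minerActorAddress),
// the message that the miner then signs with its VRF key to produce the ticket.
// sha256 stands in for the hash H; the VRF signing step is not shown.
func ticketVRFInput(beaconRandomness, minerActorAddr []byte) []byte {
	h := sha256.New()
	dst := make([]byte, 8)
	binary.BigEndian.PutUint64(dst, domainSeparationTagTicketProduction)
	h.Write(dst)              // domain separation tag
	h.Write(beaconRandomness) // beacon entry for the epoch
	h.Write(minerActorAddr)   // ensures distinct tickets per miner
	return h.Sum(nil)
}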

Ticket Validation

Each Ticket should be generated from the prior one in the VRF-chain and verified accordingly.

Minimum Miner Size

In order to secure Storage Power Consensus, the system defines a minimum miner size required to participate in consensus.

Specifically, miners must have at least MIN_MINER_SIZE_STOR of power (i.e. storage power currently used in storage deals) in order to participate in leader election. If no miner has MIN_MINER_SIZE_STOR or more power, then miners with at least as much power as the smallest miner in the top MIN_MINER_SIZE_TARG miners (sorted by storage power) are able to participate in leader election. In plain English, taking MIN_MINER_SIZE_TARG = 3 for instance, this means that miners with at least as much power as the 3rd largest miner are eligible to participate in consensus.

Miners smaller than this cannot mine blocks or earn block rewards in the network. Their power will still be counted in the total network (raw or claimed) storage power, even though their power will not be counted as votes for leader election. However, it is important to note that such miners can still have their power faulted and be penalized accordingly.

Accordingly, the genesis block must include miners, potentially with just CommittedCapacity sectors, in order to bootstrap the network.

The MIN_MINER_SIZE_TARG condition will not be used in a network in which any miner has more than MIN_MINER_SIZE_STOR power. It is nonetheless defined to ensure liveness in small networks (e.g. close to genesis or after large power drops).
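A hedged sketch of this eligibility rule follows; MinMinerSizeStor and MinMinerSizeTarg are stand-in names and values, and the real check is performed against the power table inside the actors code.

package consensuseligibility

import "sort"

// Illustrative stand-in parameters; the real values live in the actors code.
const (
	MinMinerSizeStor uint64 = 10 << 40 // stand-in absolute power threshold
	MinMinerSizeTarg        = 3
)

// eligibleForElection reports whether a miner with `power` may participate in
// leader election, given the powers of all miners on the network.
func eligibleForElection(power uint64, allPowers []uint64) bool {
	// Normal case: the miner itself meets the absolute threshold.
	if power >= MinMinerSizeStor {
		return true
	}
	// If any miner meets the absolute threshold, smaller miners are excluded.
	for _, p := range allPowers {
		if p >= MinMinerSizeStor {
			return false
		}
	}
	// Fallback for small networks: compare against the MinMinerSizeTarg-th
	// largest miner.
	sorted := append([]uint64(nil), allPowers...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
	idx := MinMinerSizeTarg - 1
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	if idx < 0 {
		return false
	}
	return power >= sorted[idx]
}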

Storage Power Actor

StoragePowerActorState implementation
type State struct {
	TotalRawBytePower abi.StoragePower
	// TotalBytesCommitted includes claims from miners below min power threshold
	TotalBytesCommitted  abi.StoragePower
	TotalQualityAdjPower abi.StoragePower
	// TotalQABytesCommitted includes claims from miners below min power threshold
	TotalQABytesCommitted abi.StoragePower
	TotalPledgeCollateral abi.TokenAmount

	// These fields are set once per epoch in the previous cron tick and used
	// for consistent values across a single epoch's state transition.
	ThisEpochRawBytePower     abi.StoragePower
	ThisEpochQualityAdjPower  abi.StoragePower
	ThisEpochPledgeCollateral abi.TokenAmount
	ThisEpochQAPowerSmoothed  smoothing.FilterEstimate

	MinerCount int64
	// Number of miners having proven the minimum consensus power.
	MinerAboveMinPowerCount int64

	// A queue of events to be triggered by cron, indexed by epoch.
	CronEventQueue cid.Cid // Multimap, (HAMT[ChainEpoch]AMT[CronEvent])

	// First epoch in which a cron task may be stored.
	// Cron will iterate every epoch between this and the current epoch inclusively to find tasks to execute.
	FirstCronEpoch abi.ChainEpoch

	// Claimed power for each miner.
	Claims cid.Cid // Map, HAMT[address]Claim

	ProofValidationBatch *cid.Cid // Multimap, (HAMT[Address]AMT[SealVerifyInfo])
}
StoragePowerActor implementation
func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.CreateMiner,
		3:                         a.UpdateClaimedPower,
		4:                         a.EnrollCronEvent,
		5:                         a.CronTick,
		6:                         a.UpdatePledgeTotal,
		7:                         nil, // deprecated
		8:                         a.SubmitPoRepForBulkVerify,
		9:                         a.CurrentTotalPower,
	}
}
func (a Actor) Constructor(rt Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	st, err := ConstructState(adt.AsStore(rt))
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
	rt.StateCreate(st)
	return nil
}
The Power Table

The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Quality-Adjusted Power Fraction over time. That is, a miner whose quality adjusted power represents 1% of total quality adjusted power on the network should mine 1% of blocks on expectation.

SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), for failed PoSts (decrementing miner power) or for other storage and consensus faults.

Sector ProveCommit is the first time power is proven to the network and hence power is first added upon successful sector ProveCommit. Power is also added when a sector is declared as recovered. Miners are expected to prove over all their sectors that contribute to their power.

Power is decremented when a sector expires, when a sector is declared or detected to be faulty, or when it is terminated through miner invocation. Miners can also extend the lifetime of a sector through ExtendSectorExpiration.

The Miner lifecycle in the power table should be roughly as follows:

  • MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
  • UpdatePower: These power increments and decrements are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
    • Power is incremented at ProveCommit, as a subcall of miner.ProveCommitSector or miner.ProveCommitAggregate
    • Power of a partition is decremented immediately after a missed WindowPoSt (DetectedFault).
    • A particular sector’s power is decremented when it enters into a faulty state either through Declared Faults or Skipped Faults.
    • A particular sector’s power is added back after recovery is declared and proven by PoSt.
    • A particular sector’s power is removed when the sector is expired or terminated through miner invocation.

To summarize, only sectors in the Active state will command power. A Sector becomes Active when it is added upon ProveCommit. Power is immediately decremented when it enters into the faulty state. Power will be restored when its declared recovery is proven. A sector’s power is removed when it is expired or terminated through miner invocation.
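As a minimal illustration of a power table update, the sketch below applies a signed power delta to a single miner's claim; the types and function are hypothetical and omit the network-total and threshold bookkeeping performed by the Storage Power Actor.

package powertable

import "math/big"

// Claim mirrors the per-miner entry in the power table: raw-byte and
// quality-adjusted power (the types are stand-ins for the actor state).
type Claim struct {
	RawBytePower    *big.Int
	QualityAdjPower *big.Int
}

// applyPowerDelta adds a (possibly negative) delta to a miner's claim: a
// positive delta at ProveCommit or proven recovery, a negative delta on
// fault, expiration, or termination. The real actor also adjusts the
// network-wide totals and re-evaluates the minimum-power threshold.
func applyPowerDelta(c *Claim, rawDelta, qaDelta *big.Int) {
	c.RawBytePower = new(big.Int).Add(c.RawBytePower, rawDelta)
	c.QualityAdjPower = new(big.Int).Add(c.QualityAdjPower, qaDelta)
}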

Pledge Collateral

Pledge Collateral is slashed for any fault affecting storage-power consensus. These include:

  • faults to expected consensus in particular (see Consensus Faults), which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
  • faults affecting consensus power more generally, specifically uncommitted power faults (i.e. Storage Faults), which will be reported by the CronActor automatically or when a miner terminates a sector earlier than its promised duration.

For a more detailed discussion on Pledge Collateral, please see the Miner Collaterals section.

Token

Minting Model

Many blockchains mint tokens based on a simple exponential decay model. Under this model, block rewards are highest in the beginning, and miner participation is often the lowest, so mining generates many tokens per unit of work early in the networkʼs life, then rapidly decreases.

Over many cryptoeconomic simulations, it became clear that the simple exponential decay model would encourage short-term behavior around network launch with an unhealthy effect on the Filecoin Economy. Specifically, it would incentivize storage miners to over-invest in hardware for the sealing stage of mining to onboard storage as quickly as possible. It would be profitable to exit the network after exhausting these early rewards, even if it resulted in losing client data. This would harm the network: clients would lose data and have less access to long-term storage, and miners would have little incentive to contribute more resources to the network. Additionally, this would result in the majority of network subsidies being paid based wholly on timing, rather than actual storage (and hence value) provided to the network.

To encourage consistent storage onboarding and investment in long-term storage, not just rapid sealing, Filecoin introduces the concept of a network baseline. Instead of minting tokens based purely on elapsed time, block rewards scale up as total storage power on the network increases. This preserves the shape of the original exponential decay model, but softens it in the earliest days of the network. Once the network reaches the baseline, the cumulative block reward issued is identical to a simple exponential decay model, but if the network does not pass the pre-established threshold, a portion of block rewards is deferred. The overall result is that Filecoin rewards to miners more closely match the utility they, and the network as a whole, provide to clients.

Specifically, a hybrid exponential minting mechanism is introduced with a proportion of the reward coming from simple exponential decay, “Simple Minting” and the other proportion from network baseline, “Baseline Minting”. The total reward per epoch will be the sum of the two rewards. Mining Filecoin should be even more profitable with this mechanism. Simple minting allocation disproportionately rewards early miners and provides counter pressure to shocks. Baseline minting allocation mints more tokens when more value for the network has been created. More tokens are minted to facilitate greater trade when the network can unlock a greater potential. This should lead to increased creation of value for the network and lower risk of minting filecoin too quickly.

The protocol allocates 30% of Storage Mining Allocation in Simple Minting and the remaining 70% in Baseline Minting. 30% of Simple Minting can provide counter forces in the event of shocks. Baseline capacity can start from a smaller percentage of worldʼs storage today, grow at a rapid rate, and catch up to a higher but still reasonable percentage of worldʼs storage in the future. As such, the network baseline will start from 1EiB (which is less than 0.01% of the worldʼs storage today) and grow at an annual rate of 200% (higher than the usual world storage annual growth rate at 40%). The community can come together to slow down the rate of growth when the network is providing 1-10% of the worldʼs storage.

There are many features that will make passing the baseline more efficient and economical and unleash a greater share of baseline minting. The community can come together to collectively achieve these goals:

  • More performant Proof of Replication algorithms, with lower on chain footprint, faster verification time, cheaper hardware requirement, different security assumptions, resulting in sectors with longer lifetime and enabling sector upgrades without reseal.
  • A more scalable consensus algorithm that can provide greater throughput and handle larger volume with shorter finality.
  • More deal functionalities that allow sectors to last for longer.

Lastly, it is important to note that while the block reward incentivizes participation, it cannot be treated as a resource to be exploited. It is a common pool of subsidies that seeds and grows the network to benefit the economy and participants. An example of different stages of the economy and different sources of subsidies is illustrated in the following Figure.

Filecoin Economy Stages

Block Reward Minting

In this section, we provide the mathematical specification for Simple Minting, Baseline Minting, and Block Reward Issuance, along with the key mathematical properties of each.

Economic parameters

  • $M_\infty$ is the total asymptotic number of tokens to be emitted as storage-mining block rewards. Per the Token Allocation spec, $M_\infty := 55\% \cdot \texttt{FIL\_BASE} = 0.55 \cdot 2\times 10^9 FIL = 1.1 \times 10^9 FIL$. The dimension of the $M_\infty$ quantity is tokens.

  • $\lambda$ is the “simple exponential decay” minting rate corresponding to a 6-year half-life. The meaning of “simple exponential decay” is that the total minted supply at time $t$ is $M_\infty \cdot (1 - e^{-\lambda t})$, so the specification of $\lambda$ in symbols becomes the equation $1 - e^{-\lambda \cdot 6yr} = \frac{1}{2}$. Note that a “year” is poorly defined. The simplified definition of $1yr := 365d$ was agreed upon for Filecoin. Of course, $1d = 86400s$, so $1yr = 31536000s$. We can solve this equation as

\[\lambda = \frac{\ln 2}{6yr} = \frac{\ln 2}{189216000s} \approx 3.663258818 \times 10^{-9} Hz\]

The dimension of the $\lambda$ quantity is time$^{-1}$.

  • $\gamma$ is the mixture between baseline and simple minting. A $\gamma$ value of 1.0 corresponds to pure baseline minting, while a $\gamma$ value of 0.0 corresponds to pure simple minting. We currently use $\gamma := 0.7$. The $\gamma$ quantity is dimensionless.

  • $b(t)$ is the baseline function, which was designed as an exponential

$$b(t) = b_0 \cdot e^{g t}$$

where

  • $b_0$ is the “initial baseline”. The dimension of the $b_0$ quantity is information.
  • $g$ is related to the baseline’s “annual growth rate” ($g_a$) by the equation $\exp(g \cdot 1yr) = 1 + g_a$, which has the solution
$$g = \frac{\ln\left(1 + g_a\right)}{31536000s}.$$

While $g_a$ is dimensionless, the dimension of the $g$ quantity is time$^{-1}$.

The dimension of the $b(t)$ quantity is information.

Simple Minting

  • $M_{\infty B}$ is the total number of tokens to be emitted via baseline minting: $M_{\infty B} = M_\infty \cdot \gamma$. Correspondingly, $M_{\infty S}$ is the total asymptotic number of tokens to be emitted via simple minting: $M_{\infty S} = M_\infty \cdot (1 - \gamma)$. Of course, $M_{\infty B} + M_{\infty S} = M_\infty$.

  • $M_S(t)$ is the total number of tokens that should ideally have been emitted by simple minting up until time $t$. It is defined as $M_S(t) = M_{\infty S} \cdot (1 - e^{-\lambda t})$. It is easy to verify that $\lim_{t\rightarrow\infty} M_S(t) = M_{\infty S}$.

Note that $M_S(t)$ is easy to calculate, and can be determined quite independently of the network’s state. (This justifies the name “simple minting”.)

Baseline Minting

To define $M_B(t)$ (which is the number of tokens that should be emitted up until time $t$ by baseline minting), we must introduce a number of auxiliary variables, some of which depend on network state.

  • $R(t)$ is the instantaneous network raw-byte power (the total amount of bytes among all active sectors) at time $t$. This quantity is state-dependent—it depends on the activities of miners on the network (specifically: commitment, expiration, faulting, and termination of sectors). The dimension of the $R(t)$ quantity is information.

  • $\overline{R}(t)$ is the capped network raw-byte power, defined as $\overline{R}(t):= \min\{b(t), R(t)\}$. Its dimension is also information.

  • $\overline{R}_\Sigma(t)$ is the cumulative capped raw-byte power, defined as $\overline{R}_\Sigma(t) := \int_0^t \overline{R}(x)\, \mathrm{d}x$. The dimension of $\overline{R}_\Sigma(t)$ is information$\cdot$time (a dimension often referred to as “spacetime”).

  • $\theta(t)$ is the “effective network time”, and is defined as the solution to the equation

$$\int_0^{\theta(t)} b(x)\, \mathrm{d}x = \int_0^t \overline{R}(x)\, \mathrm{d}x = \overline{R}_\Sigma(t)$$

By plugging in the definition of $b(x)$ and evaluating the integral, we can solve for a closed form of $\theta(t)$ as follows:

$$\int_0^{\theta(t)} b(x)\, \mathrm{d}x = \frac{b_0}{g} \left( e^{g\theta(t)} - 1 \right) = \overline{R}_\Sigma(t)$$ $$\theta(t) = \frac{1}{g} \ln \left(\frac{g \overline{R}_\Sigma(t)}{b_0}+1\right)$$
  • $M_B(t)$ is defined similarly to $M_S(t)$, just with $\theta(t)$ in place of $t$ and $M_{\infty B}$ in place of $M_{\infty S}$:
$$M_B(t) = M_{\infty B} \cdot \left(1 - e^{-\lambda \theta(t)}\right)$$

Block Reward Issuance

  • $M(t)$, the total number of tokens to be emitted as expected block rewards up until time $t$, is defined as the sum of simple and baseline minting:
$$M(t) = M_S(t) + M_B(t)$$

Now we have defined a continuous target trajectory for cumulative minting. But minting actually occurs in discrete increments. Each epoch, a “tipset” is formed consisting of the blocks of multiple winners, each of which receives an equal, finite amount of reward. A single miner may win multiple times in an epoch; it may only submit one block, but it receives rewards as if it had submitted multiple winning blocks. The mechanism by which multiple wins are rewarded is multiplication by a variable called WinCount, so we refer to the finite quantity minted and awarded for each win as the “reward per WinCount” or “per win reward”.

  • $\tau$ is the duration of an “epoch” or “round” (these are synonymous). Per the spec, $\tau = 30s$. The dimension of $\tau$ is time.
  • $E$ is a parameter which determines the expected number of wins per round. While $E$ could be considered dimensionless, it is useful to give it a dimension of “wins”. In Filecoin, the value of $E$ is 5.
  • $W(n)$ is the total number of wins by all miners in the tipset during round $n$. This also has dimension “wins”. For each $n$, $W(n)$ is a random variable with the independent identical distribution $\mathrm{Poisson}(E)$.
  • $w(n)$ is the “reward per WinCount” or “per win reward” for round $n$. It is defined by:
$$w(n) = \frac{\max\{M(n\tau+\tau) - M(n\tau),0\}}{E}$$

The dimension of $w(n)$ is tokens$\cdot$wins$^{-1}$.

  • While $M(t)$ is a continuous target for minted supply, the discrete and random amount of tokens which have been minted as of time $t$ is
$$m(t) = \sum_{k=0}^{\left\lfloor t/\tau\right\rfloor-1} w(k) W(k)$$

$m(t)$ depends on past values of both $W(n)$ and $R(n\tau)$.
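The following sketch pulls these definitions together in float64 arithmetic: $M_S(t)$, $\theta(t)$, $M_B(t)$ and the per-win reward $w(n)$. It is illustrative only; the on-chain implementation works in attoFIL with fixed-point big-integer arithmetic, and all names here are hypothetical.

package mintingexample

import "math"

// Illustrative parameters in SI units; the protocol uses fixed-point
// big-integer arithmetic and attoFIL, not float64.
const (
	mInfinity = 1.1e9                 // M_inf, total storage-mining reward (FIL)
	gamma     = 0.7                   // baseline minting share
	year      = 365 * 24 * 3600.0     // seconds
	lambda    = math.Ln2 / (6 * year) // 6-year half-life decay rate (1/s)
	epochDur  = 30.0                  // tau, seconds per epoch
	expWins   = 5.0                   // E, expected wins per epoch
)

// simpleMinted returns M_S(t): the cumulative simple-minting target at time t (seconds).
func simpleMinted(t float64) float64 {
	return mInfinity * (1 - gamma) * (1 - math.Exp(-lambda*t))
}

// effectiveNetworkTime returns theta(t) from the cumulative capped raw-byte
// power cumCapped (byte-seconds), the initial baseline b0 (bytes) and the
// baseline growth rate g (1/s).
func effectiveNetworkTime(cumCapped, b0, g float64) float64 {
	return math.Log(g*cumCapped/b0+1) / g
}

// baselineMinted returns M_B(t) given theta(t).
func baselineMinted(theta float64) float64 {
	return mInfinity * gamma * (1 - math.Exp(-lambda*theta))
}

// perWinReward returns w(n): the increase in the total minting target over
// epoch n, divided by the expected number of wins E. theta is supplied as a
// function of time because it depends on the (state-dependent) capped power.
func perWinReward(n float64, theta func(t float64) float64) float64 {
	mint := func(t float64) float64 { return simpleMinted(t) + baselineMinted(theta(t)) }
	delta := mint((n+1)*epochDur) - mint(n*epochDur)
	if delta < 0 {
		delta = 0
	}
	return delta / expWins
}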

Token Allocation

Filecoinʼs token distribution is broken down as follows. A maximum of 2,000,000,000 FIL will ever be created, referred to as FIL_BASE. Of the Filecoin genesis block allocation, 10% of FIL_BASE were allocated for fundraising, of which 7.5% were sold in the 2017 token sale, and the 2.5% remaining were allocated for ecosystem development and potential future fundraising. 15% of FIL_BASE were allocated to Protocol Labs (including 4.5% for the PL team & contributors), and 5% were allocated to the Filecoin Foundation. The other 70% of all tokens were allocated to miners, as mining rewards, “for providing data storage service, maintaining the blockchain, distributing data, running contracts, and more.” There are multiple types of mining that these rewards will support over time; therefore, this allocation has been subdivided to cover different mining activities. A pie chart reflecting the FIL token allocation is shown in the following Figure.

Filecoin Token Allocation

Storage Mining Allocation. At network launch, the only mining group with allocated incentives will be storage miners. This is the earliest group of miners, and the one responsible for maintaining the core functionality of the protocol. Therefore, this group has been allocated the largest amount of mining rewards. 55% of FIL_BASE (78.6% of mining rewards) is allocated to storage mining. This will cover primarily block rewards, which reward maintaining the blockchain, running actor code, and subsidizing reliable and useful storage. This amount will also cover early storage mining rewards, such as rewards in the SpaceRace competition and other potential types of storage miner initialization, such as faucets.

Mining Reserve. The Filecoin ecosystem must ensure incentives exist for all types of miners (e.g. retrieval miners, repair miners, and future unknown types of miners) to support a robust economy. In order to ensure the network can provide incentives for these other types of miners, 15% of FIL_BASE (21.4% of mining rewards) has been set aside as a Mining Reserve. It will be up to the community to determine in the future how to distribute those tokens, through Filecoin improvement proposals (FIPs) or similar decentralized decision making processes. For example, the community might decide to create rewards for retrieval mining or other types of mining-related activities. The Filecoin Network, like all blockchain networks and open source projects, will continue to evolve, adapt, and overcome challenges for many years. Reserving these tokens provides future flexibility for miners and the ecosystem as a whole. Other types of mining, like retrieval mining, are not yet subsidized and yet are very important to the Filecoin Economy; arguably, those uses may need a larger percentage of mining rewards. As years pass and the network evolves, it will be up to the community to decide whether this reserve is enough, or whether to make adjustments with unmined tokens.

Market Cap. Various communities estimate the size of cryptocurrency and token networks using different analogous measures of market capitalization. The most sensible token supply for such calculations is FIL_CirculatingSupply, because unmined, unvested, locked, and burnt funds are not circulating or tradeable in the economy. Any calculations using larger measures such as FIL_BASE are likely to be erroneously inflated and not to be believed.

Total Burnt Funds. Some filecoin are burned to fund on-chain computations and bandwidth as network message fees, in addition to those burned in penalties for storage faults and consensus faults, creating long-term deflationary pressure on the token. Accompanying the network message fees is the priority fee that is not burned, but goes to the block-producing miners for including a message.

Parameter Value Description
FIL_BASE 2,000,000,000 FIL The maximum amount of FIL that will ever be created.
FIL_MiningReserveAlloc 300,000,000 FIL Tokens reserved for funding mining to support growth of the Filecoin Economy, whose future usage will be decided by the Filecoin community.
FIL_StorageMiningAlloc 1,100,000,000 FIL The amount of FIL allocated to storage miners through block rewards and network initialization.
FIL_Vested Sum of genesis MultisigActors.AmountUnlocked Total amount of FIL that is vested from the genesis allocation.
FIL_StorageMined RewardActor.TotalStoragePowerReward The amount of FIL that has been mined by storage miners.
FIL_Locked TotalPledgeCollateral + TotalProviderDealCollateral + TotalClientDealCollateral + TotalPendingDealPayment + OtherLockedFunds The amount of FIL locked as part of mining, deals, and other mechanisms.
FIL_CirculatingSupply FIL_Vested + FIL_Mined - TotalBurntFunds - FIL_Locked The amount of FIL circulating and tradeable in the economy. The basis for Market Cap calculations.
TotalBurntFunds BurntFundsActor.Balance Total FIL burned as part of penalties and on-chain computations.
TotalPledgeCollateral StoragePowerActor.TotalPledgeCollateral Total FIL locked as pledge collateral in all miners.
TotalProviderDealCollateral StorageMarketActor.TotalProviderDealCollateral Total FIL locked as provider deal collateral.
TotalClientDealCollateral StorageMarketActor.TotalClientDealCollateral Total FIL locked as client deal collateral.
TotalPendingDealPayment StorageMarketActor.TotalPendingDealPayment Total FIL locked as pending client deal payment.

Payment Channels

Payment channels are generally used as a mechanism to increase the scalability of blockchains and enable users to transact without involving (i.e., publishing their transactions on) the blockchain, which would otherwise: i) increase the load on the system, and ii) incur gas costs for the user. Payment channels generally use a smart contract as an agreement between the two participants. In the Filecoin blockchain, Payment Channels are realised by the paychActor.

The goal of the Payment Channel Actor specified here is to enable a series of off-chain microtransactions for applications built on top of Filecoin to be reconciled on-chain at a later time with fewer messages that involve the blockchain. Payment channels are already used in the Retrieval Market of the Filecoin Network, but their applicability is not constrained within this use-case only. Hence, here, we provide a detailed description of Payment Channels in the Filecoin network and then describe how Payment Channels are used in the specific case of the Filecoin Retrieval Market.

The payment channel actor can be used to open long-lived, flexible payment channels between users. Filecoin payment channels are uni-directional and can be funded by adding to their balance. Given the context of uni-directional payment channels, we define the payment channel sender as the party that receives some service, creates the channel, deposits funds and sends payments (hence the term payment channel sender). The payment channel recipient, on the other hand, is defined as the party that provides services and receives payment for the services delivered (hence the term payment channel recipient). The fact that payment channels are uni-directional means that only the payment channel sender can add funds and only the recipient can receive funds. Payment channels are identified by a unique address, as is the case with all Filecoin actors.

The payment channel state structure looks like this:

// A given payment channel actor is established by From (the recipient of a service)
// to enable off-chain microtransactions to To (the provider of a service) to be reconciled
// and tallied on chain.
type State struct {
	// Channel owner, who has created and funded the actor - the channel sender
	From addr.Address
	// Recipient of payouts from channel
	To addr.Address

	// Amount successfully redeemed through the payment channel, paid out on `Collect()`
	ToSend abi.TokenAmount

	// Height at which the channel can be `Collected`
	SettlingAt abi.ChainEpoch
	// Height before which the channel `ToSend` cannot be collected
	MinSettleHeight abi.ChainEpoch

	// Collections of lane states for the channel, maintained in ID order.
	LaneStates []*LaneState
}

Before continuing with the details of the Payment Channel and its components and features, it is worth defining a few terms.

  • Voucher: a signed message created by either of the two channel parties that updates the channel balance. To differentiate them from the payment channel sender/recipient, we refer to the voucher parties as the voucher sender/recipient, who might or might not be the same as the payment channel ones (i.e., the voucher sender might be either the payment channel recipient or the payment channel sender).
  • Redeeming a voucher: the voucher MUST be submitted on-chain by the opposite party from the one that created it. Redeeming a voucher does not trigger movement of funds from the channel to the recipient’s account, but it does incur message/gas costs. Vouchers can be redeemed at any time up to Collect (see below), as long as the voucher has a higher Nonce than any previously submitted one.
  • UpdateChannelState: this is the process by which a voucher is redeemed, i.e., a voucher is submitted (but not cashed-out) on-chain.
  • Settle: this process starts closing the channel. It can be called by either the channel creator (sender) or the channel recipient.
  • Collect: with this process funds are eventually transferred from the payment channel sender to the payment channel recipient. This process incurs message/gas costs.

Vouchers

Traditionally, in order to transact through a Payment Channel, the payment channel parties send to each other signed messages that update the balance of the channel. In Filecoin, these signed messages are called vouchers.

Throughout the interaction between the two parties, the channel sender (From address) is sending vouchers to the recipient (To address). The Value included in the voucher indicates the value available for the receiving party to redeem. The Value is based on the service that the payment channel recipient has provided to the payment channel sender. Either the payment channel recipient or the payment channel sender can Update the balance of the channel and the balance ToSend to the payment channel recipient (using a voucher), but the Update (i.e., the voucher) has to be accepted by the other party before funds can be collected. Furthermore, the voucher has to be redeemed by the opposite party from the one that issued the voucher. The payment channel recipient can choose to Collect this balance at any time incurring the corresponding gas cost.

Redeeming a voucher is not transferring funds from the payment channel to the recipient’s account. Instead, redeeming a voucher denotes the fact that some service worth of Value has been provided by the payment channel recipient to the payment channel sender. It is not until the whole payment channel is collected that the funds are dispatched to the provider’s account.

This is the structure of the voucher:

// A voucher can be created and sent by either of the two parties. The `To` payment channel address can redeem the voucher and then `Collect` the funds.
type SignedVoucher struct {
	// ChannelAddr is the address of the payment channel this signed voucher is valid for
	ChannelAddr addr.Address
	// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
	TimeLockMin abi.ChainEpoch
	// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
	// TimeLockMax set to 0 means no timeout
	TimeLockMax abi.ChainEpoch
	// (optional) The SecretPreImage is used by `To` to validate
	SecretPreimage []byte
	// (optional) Extra can be specified by `From` to add a verification method to the voucher
	Extra *ModVerifyParams
	// Specifies which lane the Voucher is added to (will be created if does not exist)
	Lane uint64
	// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
	Nonce uint64
	// Amount voucher can be redeemed for
	Amount big.Int
	// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
	MinSettleHeight abi.ChainEpoch

	// (optional) Set of lanes to be merged into `Lane`
	Merges []Merge

	// Sender's signature over the voucher
	Signature *crypto.Signature
}

Over the course of a transaction cycle, each participant in the payment channel can send Vouchers to the other participant.

For instance, if the payment channel sender (From address) has sent to the payment channel recipient (To address) the following three vouchers (voucher_val, voucher_nonce) for a lane with 100 FIL to be redeemed: (10, 1), (20, 2), (30, 3), then the recipient could choose to redeem (30, 3) bringing the lane’s value to 70 (100 - 30) and cancelling the preceding vouchers, i.e., they would not be able to redeem (10, 1) or (20, 2) anymore. However, they could redeem (20, 2), that is, 20 FIL, and then follow up with (30, 3) to redeem the remaining 10 FIL later.

It is worth highlighting that while the Nonce is a strictly increasing value to denote the sequence of vouchers issued within the remit of a payment channel, the Value is not a strictly increasing value. Decreasing Value (although expected rarely) can be realized in cases of refunds that need to flow in the direction from the payment channel recipient to the payment channel sender. This can be the case when some bits arrive corrupted in the case of file retrieval, for instance.

Vouchers are signed by the party that creates them and are authenticated using a (Secret, PreImage) pair provided by the paying party (channel sender). If the PreImage is indeed a pre-image of the Secret when used as input to some given algorithm (typically a one-way function like a hash), the Voucher is valid. The Voucher itself contains the PreImage but not the Secret (communicated separately to the receiving party). This enables multi-hop payments since an intermediary cannot redeem a voucher on their own. Vouchers can also be used to update the minimum height at which a channel will be settled (i.e., closed), or have TimeLocks to prevent voucher recipients from redeeming them too early. A channel can also have a MinCloseHeight to prevent it being closed prematurely (e.g. before the payment channel recipient has collected funds) by the payment channel creator/sender.
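As an illustration of the secret-based validation, the sketch below checks that a revealed secret hashes to the commitment carried in the voucher; sha256 stands in for whichever hash the runtime actually uses, and the function name is hypothetical.

package paychexample

import (
	"bytes"
	"crypto/sha256"
)

// checkSecret illustrates the hash-lock check performed when a secret-gated
// voucher is redeemed: the redeemer reveals `secret`, and it must hash to the
// commitment carried in the voucher. If the voucher carries no commitment,
// the check passes trivially.
func checkSecret(voucherSecretHash, secret []byte) bool {
	if len(voucherSecretHash) == 0 {
		return true // voucher is not secret-gated
	}
	h := sha256.Sum256(secret)
	return bytes.Equal(h[:], voucherSecretHash)
}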

Once their transactions have completed, either party can choose to Settle (i.e., close) the channel. There is a 12hr period after Settle during which either party can submit any outstanding vouchers. Once the vouchers are submitted, either party can then call Collect. This will send the payment channel recipient the ToSend amount from the channel, and the channel sender (From address) will be refunded the remaining balance in the channel (if any).

Lanes

In addition, payment channels in Filecoin can be split into lanes created as part of updating the channel state with a payment voucher. Each lane has an associated nonce and amount of tokens it can be redeemed for. Lanes can be thought of as transactions for several different services provided by the channel recipient to the channel sender. The nonce plays the role of a sequence number of vouchers within a given lane, where a voucher with a higher nonce replaces a voucher with a lower nonce.

Payment channel lanes allow for a lot of accounting between parties to be done off-chain and reconciled via single updates to the payment channel. The multiple lanes enable two parties to use a single payment channel to adjudicate multiple independent sets of payments.

One example of such accounting is merging of lanes. When a pair of channel sender-recipient nodes have a payment channel established between them with many lanes, the channel recipient would have to pay the gas cost for each one of the lanes in order to Collect funds. Merging of lanes allows the channel recipient to send a “merge” request to the channel sender, asking to merge (some of) the lanes and consolidate the funds. This way, the recipient can reduce the overall gas cost. As an incentive for the channel sender to accept the merge lane request, the channel recipient can ask for a lower total value to balance out the gas cost. For instance, if the recipient has collected vouchers worth 10 FIL from two lanes, say 5 from each, and the gas cost of submitting the vouchers for these funds is 2, then it can ask for 9 from the creator if the latter accepts to merge the two lanes. This way, the channel sender pays less overall for the services it received and the channel recipient pays less gas cost to submit the vouchers for the services they provided.

Lifecycle of a Payment Channel

Summarising, we have the following sequence (a code sketch of the flow follows the list):

  1. Two parties agree to a series of transactions (for instance as part of file retrieval) with one party paying the other party up to some total sum of Filecoin over time. This is part of the deal-phase, it takes place off-chain and does not (at this stage) involve payment channels.
  2. One party, called the payment channel sender (who is the recipient of some service, e.g., a file in the case of file retrieval), creates the Payment Channel Actor and deposits funds into it.
  3. Any of the two parties can create vouchers to send to the other party.
  4. The voucher recipient saves the voucher locally. Each voucher has to be submitted by the opposite party from the one that created the voucher.
  5. Either immediately or later, the voucher recipient “redeems” the voucher by submitting it to the chain, calling UpdateChannelState
  6. The channel sender or the channel recipient calls Settle on the payment channel.
  7. The 12-hour period to close the channel begins.
  8. If either of the two parties has outstanding (i.e., non-redeemed) vouchers, they should now submit the vouchers to the chain (there should be the option of this being done automatically). If the channel recipient so desires, they should send a “merge lanes” request to the sender.
  9. 12-hour period ends.
  10. Either the channel sender or the channel recipient calls Collect.
  11. Funds are transferred to the channel recipient’s account and any unclaimed balance goes back to channel sender.
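The sketch below walks through this sequence against a hypothetical PaychAPI interface; it is not the Lotus API, and the amounts, lane and nonce values are arbitrary.

package paychlifecycle

import "context"

type (
	Address string

	Voucher struct {
		Lane   uint64
		Nonce  uint64
		Amount int64
	}

	// PaychAPI is a hypothetical interface capturing the operations used in
	// the lifecycle above; it does not correspond to a specific Lotus API.
	PaychAPI interface {
		CreateChannel(ctx context.Context, from, to Address, deposit int64) (Address, error)
		CreateVoucher(ctx context.Context, ch Address, lane, nonce uint64, amount int64) (Voucher, error)
		UpdateChannelState(ctx context.Context, ch Address, v Voucher) error // redeem a voucher on-chain
		Settle(ctx context.Context, ch Address) error
		Collect(ctx context.Context, ch Address) error
	}
)

// runLifecycle walks through steps 2-11: the sender creates and funds the
// channel, issues vouchers off-chain, the recipient redeems the latest one,
// either party settles, and after the settlement window either party collects.
func runLifecycle(ctx context.Context, api PaychAPI, sender, recipient Address) error {
	ch, err := api.CreateChannel(ctx, sender, recipient, 100)
	if err != nil {
		return err
	}
	// Off-chain: the sender issues vouchers with increasing nonce on a lane.
	v, err := api.CreateVoucher(ctx, ch, 0, 3, 30)
	if err != nil {
		return err
	}
	// The recipient redeems the highest-nonce voucher it holds.
	if err := api.UpdateChannelState(ctx, ch, v); err != nil {
		return err
	}
	if err := api.Settle(ctx, ch); err != nil {
		return err
	}
	// ... wait out the settlement window (12 hours) ...
	return api.Collect(ctx, ch)
}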

Payment Channels as part of the Filecoin Retrieval

Payment Channels are used in the Filecoin Retrieval Market to enable efficient off-chain payments and accounting between parties for what is expected to be a series of microtransactions, as these occur during data retrieval.

In particular, given that there is no proving method for the act of sending data from a provider (miner) to a client, there is no trust anchor between the two. Therefore, in order to avoid misbehaviour, Filecoin makes use of payment channels to realise a step-wise “data transfer <-> payment” relationship between the data provider and the client (data receiver). Clients issue requests for data to which miners respond. The miner is entitled to ask for interim payments, at a volume-based interval agreed in the Deal phase. In order to facilitate this process, the Filecoin client creates a payment channel once the provider has agreed to the proposed deal. The client should also lock in the payment channel an amount of funds equal to that needed for retrieval of the entire block of data requested. Every time the provider completes transfer of the pre-specified amount of data, it can request a payment. The client responds to this request with a voucher which the provider can redeem (immediately or later), as per the process described earlier.

package paychmgr

import (
	"context"
	"fmt"

	"github.com/ipfs/go-cid"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	cborutil "github.com/filecoin-project/go-cbor-util"
	actorstypes "github.com/filecoin-project/go-state-types/actors"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/builtin/v8/paych"

	"github.com/filecoin-project/lotus/api"
	lpaych "github.com/filecoin-project/lotus/chain/actors/builtin/paych"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/lotus/lib/sigs"
)

// insufficientFundsErr indicates that there are not enough funds in the
// channel to create a voucher
type insufficientFundsErr interface {
	Shortfall() types.BigInt
}

type ErrInsufficientFunds struct {
	shortfall types.BigInt
}

func newErrInsufficientFunds(shortfall types.BigInt) *ErrInsufficientFunds {
	return &ErrInsufficientFunds{shortfall: shortfall}
}

func (e *ErrInsufficientFunds) Error() string {
	return fmt.Sprintf("not enough funds in channel to cover voucher - shortfall: %d", e.shortfall)
}

func (e *ErrInsufficientFunds) Shortfall() types.BigInt {
	return e.shortfall
}

type laneState struct {
	redeemed big.Int
	nonce    uint64
}

func (ls laneState) Redeemed() (big.Int, error) {
	return ls.redeemed, nil
}

func (ls laneState) Nonce() (uint64, error) {
	return ls.nonce, nil
}

// channelAccessor is used to simplify locking when accessing a channel
type channelAccessor struct {
	from address.Address
	to   address.Address

	// chctx is used by background processes (eg when waiting for things to be
	// confirmed on chain)
	chctx         context.Context
	sa            *stateAccessor
	api           managerAPI
	store         *Store
	lk            *channelLock
	fundsReqQueue []*fundsReq
	msgListeners  msgListeners
}

func newChannelAccessor(pm *Manager, from address.Address, to address.Address) *channelAccessor {
	return &channelAccessor{
		from:         from,
		to:           to,
		chctx:        pm.ctx,
		sa:           pm.sa,
		api:          pm.pchapi,
		store:        pm.store,
		lk:           &channelLock{globalLock: &pm.lk},
		msgListeners: newMsgListeners(),
	}
}

func (ca *channelAccessor) messageBuilder(ctx context.Context, from address.Address) (lpaych.MessageBuilder, error) {
	nwVersion, err := ca.api.StateNetworkVersion(ctx, types.EmptyTSK)
	if err != nil {
		return nil, err
	}

	av, err := actorstypes.VersionForNetwork(nwVersion)
	if err != nil {
		return nil, err
	}
	return lpaych.Message(av, from), nil
}

func (ca *channelAccessor) getChannelInfo(ctx context.Context, addr address.Address) (*ChannelInfo, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	return ca.store.ByAddress(ctx, addr)
}

func (ca *channelAccessor) outboundActiveByFromTo(ctx context.Context, from, to address.Address) (*ChannelInfo, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	return ca.store.OutboundActiveByFromTo(ctx, ca.api, from, to)
}

// createVoucher creates a voucher with the given specification, setting its
// nonce, signing the voucher and storing it in the local datastore.
// If there are not enough funds in the channel to create the voucher, returns
// the shortfall in funds.
func (ca *channelAccessor) createVoucher(ctx context.Context, ch address.Address, voucher paych.SignedVoucher) (*api.VoucherCreateResult, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	// Find the channel for the voucher
	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return nil, xerrors.Errorf("failed to get channel info by address: %w", err)
	}

	// Set the voucher channel
	sv := &voucher
	sv.ChannelAddr = ch

	// Get the next nonce on the given lane
	sv.Nonce = ca.nextNonceForLane(ci, voucher.Lane)

	// Sign the voucher
	vb, err := sv.SigningBytes()
	if err != nil {
		return nil, xerrors.Errorf("failed to get voucher signing bytes: %w", err)
	}

	sig, err := ca.api.WalletSign(ctx, ci.Control, vb)
	if err != nil {
		return nil, xerrors.Errorf("failed to sign voucher: %w", err)
	}
	sv.Signature = sig

	// Store the voucher
	if _, err := ca.addVoucherUnlocked(ctx, ch, sv, types.NewInt(0)); err != nil {
		// If there are not enough funds in the channel to cover the voucher,
		// return a voucher create result with the shortfall
		var ife insufficientFundsErr
		if xerrors.As(err, &ife) {
			return &api.VoucherCreateResult{
				Shortfall: ife.Shortfall(),
			}, nil
		}

		return nil, xerrors.Errorf("failed to persist voucher: %w", err)
	}

	return &api.VoucherCreateResult{Voucher: sv, Shortfall: types.NewInt(0)}, nil
}

func (ca *channelAccessor) nextNonceForLane(ci *ChannelInfo, lane uint64) uint64 {
	var maxnonce uint64
	for _, v := range ci.Vouchers {
		if v.Voucher.Lane == lane {
			if v.Voucher.Nonce > maxnonce {
				maxnonce = v.Voucher.Nonce
			}
		}
	}

	return maxnonce + 1
}

func (ca *channelAccessor) checkVoucherValid(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]lpaych.LaneState, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	return ca.checkVoucherValidUnlocked(ctx, ch, sv)
}

func (ca *channelAccessor) checkVoucherValidUnlocked(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]lpaych.LaneState, error) {
	if sv.ChannelAddr != ch {
		return nil, xerrors.Errorf("voucher ChannelAddr doesn't match channel address, got %s, expected %s", sv.ChannelAddr, ch)
	}

	// check voucher is unlocked
	if sv.Extra != nil {
		return nil, xerrors.Errorf("voucher is Message Locked")
	}
	if sv.TimeLockMax != 0 {
		return nil, xerrors.Errorf("voucher is Max Time Locked")
	}
	if sv.TimeLockMin != 0 {
		return nil, xerrors.Errorf("voucher is Min Time Locked")
	}
	if len(sv.SecretHash) != 0 {
		return nil, xerrors.Errorf("voucher is Hash Locked")
	}

	// Load payment channel actor state
	act, pchState, err := ca.sa.loadPaychActorState(ctx, ch)
	if err != nil {
		return nil, err
	}

	// Load channel "From" account actor state
	f, err := pchState.From()
	if err != nil {
		return nil, err
	}

	from, err := ca.api.ResolveToDeterministicAddress(ctx, f, nil)
	if err != nil {
		return nil, err
	}

	// verify voucher signature
	vb, err := sv.SigningBytes()
	if err != nil {
		return nil, err
	}

	// TODO: technically, either party may create and sign a voucher.
	// However, for now, we only accept them from the channel creator.
	// More complex handling logic can be added later
	if err := sigs.Verify(sv.Signature, from, vb); err != nil {
		return nil, err
	}

	// Check the voucher against the highest known voucher nonce / value
	laneStates, err := ca.laneState(ctx, pchState, ch)
	if err != nil {
		return nil, err
	}

	// If the new voucher nonce value is less than the highest known
	// nonce for the lane
	ls, lsExists := laneStates[sv.Lane]
	if lsExists {
		n, err := ls.Nonce()
		if err != nil {
			return nil, err
		}

		if sv.Nonce <= n {
			return nil, fmt.Errorf("nonce too low")
		}

		// If the voucher amount is less than the highest known voucher amount
		r, err := ls.Redeemed()
		if err != nil {
			return nil, err
		}
		if sv.Amount.LessThanEqual(r) {
			return nil, fmt.Errorf("voucher amount is lower than amount for voucher with lower nonce")
		}
	}

	// Total redeemed is the total redeemed amount for all lanes, including
	// the new voucher
	// eg
	//
	// lane 1 redeemed:            3
	// lane 2 redeemed:            2
	// voucher for lane 1:         5
	//
	// Voucher supersedes lane 1 redeemed, therefore
	// effective lane 1 redeemed:  5
	//
	// lane 1:  5
	// lane 2:  2
	//          -
	// total:   7
	totalRedeemed, err := ca.totalRedeemedWithVoucher(laneStates, sv)
	if err != nil {
		return nil, err
	}

	// Total required balance must not exceed actor balance
	if act.Balance.LessThan(totalRedeemed) {
		return nil, newErrInsufficientFunds(types.BigSub(totalRedeemed, act.Balance))
	}

	if len(sv.Merges) != 0 {
		return nil, fmt.Errorf("dont currently support paych lane merges")
	}

	return laneStates, nil
}

func (ca *channelAccessor) checkVoucherSpendable(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, secret []byte) (bool, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	recipient, err := ca.getPaychRecipient(ctx, ch)
	if err != nil {
		return false, err
	}

	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return false, err
	}

	// Check if voucher has already been submitted
	submitted, err := ci.wasVoucherSubmitted(sv)
	if err != nil {
		return false, err
	}
	if submitted {
		return false, nil
	}

	mb, err := ca.messageBuilder(ctx, recipient)
	if err != nil {
		return false, err
	}

	mes, err := mb.Update(ch, sv, secret)
	if err != nil {
		return false, err
	}

	ret, err := ca.api.Call(ctx, mes, nil)
	if err != nil {
		return false, err
	}

	if ret.MsgRct.ExitCode != 0 {
		return false, nil
	}

	return true, nil
}

func (ca *channelAccessor) getPaychRecipient(ctx context.Context, ch address.Address) (address.Address, error) {
	_, state, err := ca.api.GetPaychState(ctx, ch, nil)
	if err != nil {
		return address.Address{}, err
	}

	return state.To()
}

func (ca *channelAccessor) addVoucher(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, minDelta types.BigInt) (types.BigInt, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	return ca.addVoucherUnlocked(ctx, ch, sv, minDelta)
}

func (ca *channelAccessor) addVoucherUnlocked(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, minDelta types.BigInt) (types.BigInt, error) {
	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return types.BigInt{}, err
	}

	// Check if the voucher has already been added
	for _, v := range ci.Vouchers {
		eq, err := cborutil.Equals(sv, v.Voucher)
		if err != nil {
			return types.BigInt{}, err
		}
		if eq {
			// Ignore the duplicate voucher.
			log.Warnf("AddVoucher: voucher re-added")
			return types.NewInt(0), nil
		}

	}

	// Check voucher validity
	laneStates, err := ca.checkVoucherValidUnlocked(ctx, ch, sv)
	if err != nil {
		return types.NewInt(0), err
	}

	// The change in value is the delta between the voucher amount and
	// the highest previous voucher amount for the lane
	laneState, exists := laneStates[sv.Lane]
	redeemed := big.NewInt(0)
	if exists {
		redeemed, err = laneState.Redeemed()
		if err != nil {
			return types.NewInt(0), err
		}
	}

	delta := types.BigSub(sv.Amount, redeemed)
	if minDelta.GreaterThan(delta) {
		return delta, xerrors.Errorf("addVoucher: supplied token amount too low; minD=%s, D=%s; laneAmt=%s; v.Amt=%s", minDelta, delta, redeemed, sv.Amount)
	}

	ci.Vouchers = append(ci.Vouchers, &VoucherInfo{
		Voucher: sv,
	})

	if ci.NextLane <= sv.Lane {
		ci.NextLane = sv.Lane + 1
	}

	return delta, ca.store.putChannelInfo(ctx, ci)
}

func (ca *channelAccessor) submitVoucher(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, secret []byte) (cid.Cid, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return cid.Undef, err
	}

	has, err := ci.hasVoucher(sv)
	if err != nil {
		return cid.Undef, err
	}

	// If the channel has the voucher
	if has {
		// Check that the voucher hasn't already been submitted
		submitted, err := ci.wasVoucherSubmitted(sv)
		if err != nil {
			return cid.Undef, err
		}
		if submitted {
			return cid.Undef, xerrors.Errorf("cannot submit voucher that has already been submitted")
		}
	}

	mb, err := ca.messageBuilder(ctx, ci.Control)
	if err != nil {
		return cid.Undef, err
	}

	msg, err := mb.Update(ch, sv, secret)
	if err != nil {
		return cid.Undef, err
	}

	smsg, err := ca.api.MpoolPushMessage(ctx, msg, nil)
	if err != nil {
		return cid.Undef, err
	}

	// If the channel didn't already have the voucher
	if !has {
		// Add the voucher to the channel
		ci.Vouchers = append(ci.Vouchers, &VoucherInfo{
			Voucher: sv,
		})
	}

	// Mark the voucher and any lower-nonce vouchers as having been submitted
	err = ca.store.MarkVoucherSubmitted(ctx, ci, sv)
	if err != nil {
		return cid.Undef, err
	}

	return smsg.Cid(), nil
}

func (ca *channelAccessor) allocateLane(ctx context.Context, ch address.Address) (uint64, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	return ca.store.AllocateLane(ctx, ch)
}

func (ca *channelAccessor) listVouchers(ctx context.Context, ch address.Address) ([]*VoucherInfo, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	// TODO: just having a passthrough method like this feels odd. Seems like
	// there should be some filtering we're doing here
	return ca.store.VouchersForPaych(ctx, ch)
}

// laneState gets the LaneStates from chain, then applies all vouchers in
// the data store over the chain state
func (ca *channelAccessor) laneState(ctx context.Context, state lpaych.State, ch address.Address) (map[uint64]lpaych.LaneState, error) {
	// TODO: we probably want to call UpdateChannelState with all vouchers to be fully correct
	//  (but technically dont't need to)

	laneCount, err := state.LaneCount()
	if err != nil {
		return nil, err
	}

	// Note: we use a map instead of an array to store laneStates because the
	// client sets the lane ID (the index) and potentially they could use a
	// very large index.
	laneStates := make(map[uint64]lpaych.LaneState, laneCount)
	err = state.ForEachLaneState(func(idx uint64, ls lpaych.LaneState) error {
		laneStates[idx] = ls
		return nil
	})
	if err != nil {
		return nil, err
	}

	// Apply locally stored vouchers
	vouchers, err := ca.store.VouchersForPaych(ctx, ch)
	if err != nil && err != ErrChannelNotTracked {
		return nil, err
	}

	for _, v := range vouchers {
		for range v.Voucher.Merges {
			return nil, xerrors.Errorf("paych merges not handled yet")
		}

		// Check if there is an existing laneState in the payment channel
		// for this voucher's lane
		ls, ok := laneStates[v.Voucher.Lane]

		// If the voucher does not have a higher nonce than the existing
		// laneState for this lane, ignore it
		if ok {
			n, err := ls.Nonce()
			if err != nil {
				return nil, err
			}
			if v.Voucher.Nonce < n {
				continue
			}
		}

		// Voucher has a higher nonce, so replace laneState with this voucher
		laneStates[v.Voucher.Lane] = laneState{v.Voucher.Amount, v.Voucher.Nonce}
	}

	return laneStates, nil
}

// Get the total redeemed amount across all lanes, after applying the voucher
func (ca *channelAccessor) totalRedeemedWithVoucher(laneStates map[uint64]lpaych.LaneState, sv *paych.SignedVoucher) (big.Int, error) {
	// TODO: merges
	if len(sv.Merges) != 0 {
		return big.Int{}, xerrors.Errorf("dont currently support paych lane merges")
	}

	total := big.NewInt(0)
	for _, ls := range laneStates {
		r, err := ls.Redeemed()
		if err != nil {
			return big.Int{}, err
		}
		total = big.Add(total, r)
	}

	lane, ok := laneStates[sv.Lane]
	if ok {
		// If the voucher is for an existing lane, and the voucher nonce
		// is higher than the lane nonce
		n, err := lane.Nonce()
		if err != nil {
			return big.Int{}, err
		}

		if sv.Nonce > n {
			// Add the delta between the redeemed amount and the voucher
			// amount to the total
			r, err := lane.Redeemed()
			if err != nil {
				return big.Int{}, err
			}

			delta := big.Sub(sv.Amount, r)
			total = big.Add(total, delta)
		}
	} else {
		// If the voucher is *not* for an existing lane, just add its
		// value (implicitly a new lane will be created for the voucher)
		total = big.Add(total, sv.Amount)
	}

	return total, nil
}

func (ca *channelAccessor) settle(ctx context.Context, ch address.Address) (cid.Cid, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return cid.Undef, err
	}

	mb, err := ca.messageBuilder(ctx, ci.Control)
	if err != nil {
		return cid.Undef, err
	}
	msg, err := mb.Settle(ch)
	if err != nil {
		return cid.Undef, err
	}
	smgs, err := ca.api.MpoolPushMessage(ctx, msg, nil)
	if err != nil {
		return cid.Undef, err
	}

	ci.Settling = true
	err = ca.store.putChannelInfo(ctx, ci)
	if err != nil {
		log.Errorf("Error marking channel as settled: %s", err)
	}

	return smgs.Cid(), err
}

func (ca *channelAccessor) collect(ctx context.Context, ch address.Address) (cid.Cid, error) {
	ca.lk.Lock()
	defer ca.lk.Unlock()

	ci, err := ca.store.ByAddress(ctx, ch)
	if err != nil {
		return cid.Undef, err
	}

	mb, err := ca.messageBuilder(ctx, ci.Control)
	if err != nil {
		return cid.Undef, err
	}

	msg, err := mb.Collect(ch)
	if err != nil {
		return cid.Undef, err
	}

	smsg, err := ca.api.MpoolPushMessage(ctx, msg, nil)
	if err != nil {
		return cid.Undef, err
	}

	return smsg.Cid(), nil
}
A voucher is sent by From to To off-chain in order to enable To to redeem payments on-chain in the future:
type SignedVoucher struct {
	// ChannelAddr is the address of the payment channel this signed voucher is valid for
	ChannelAddr addr.Address
	// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
	TimeLockMin abi.ChainEpoch
	// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
	// TimeLockMax set to 0 means no timeout
	TimeLockMax abi.ChainEpoch
	// (optional) The SecretPreImage is used by `To` to validate
	SecretPreimage []byte
	// (optional) Extra can be specified by `From` to add a verification method to the voucher
	Extra *ModVerifyParams
	// Specifies which lane the Voucher merges into (will be created if does not exist)
	Lane uint64
	// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
	Nonce uint64
	// Amount voucher can be redeemed for
	Amount big.Int
	// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
	MinSettleHeight abi.ChainEpoch

	// (optional) Set of lanes to be merged into `Lane`
	Merges []Merge

	// Sender's signature over the voucher
	Signature *crypto.Signature
}
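As an illustrative, non-normative sketch of how the payer (From) might construct and sign such a voucher off-chain, assuming the SignedVoucher type above and the VoucherSigningBytes helper shown in the actor code below; signBytes stands in for a wallet signing function and is purely hypothetical:

// Sketch only: build a voucher for a lane and sign it with the payer's key.
// `signBytes` is a hypothetical wallet signing callback.
func createVoucher(ch addr.Address, lane, nonce uint64, amount big.Int,
	signBytes func([]byte) (*crypto.Signature, error)) (*SignedVoucher, error) {

	sv := &SignedVoucher{
		ChannelAddr: ch,
		Lane:        lane,   // created on-chain if it does not exist yet
		Nonce:       nonce,  // must exceed the lane's highest redeemed nonce
		Amount:      amount, // cumulative amount redeemable on this lane
	}

	// Sign the CBOR encoding of the voucher with the Signature field unset,
	// mirroring VoucherSigningBytes below.
	vb, err := VoucherSigningBytes(sv)
	if err != nil {
		return nil, err
	}
	sig, err := signBytes(vb)
	if err != nil {
		return nil, err
	}
	sv.Signature = sig
	return sv, nil
}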
package paych

import (
	"bytes"

	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/exitcode"
	paych0 "github.com/filecoin-project/specs-actors/actors/builtin/paych"
	paych7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/paych"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
	"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)

const (
	ErrChannelStateUpdateAfterSettled = exitcode.FirstActorSpecificExitCode + iota
)

type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.UpdateChannelState,
		3:                         a.Settle,
		4:                         a.Collect,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.PaymentChannelActorCodeID
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

//type ConstructorParams struct {
//	From addr.Address // Payer
//	To   addr.Address // Payee
//}
type ConstructorParams = paych0.ConstructorParams

// Constructor creates a payment channel actor. See State for meaning of params.
func (pca *Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
	// Only InitActor can create a payment channel actor. It creates the actor on
	// behalf of the payer/payee.
	rt.ValidateImmediateCallerType(builtin.InitActorCodeID)

	// check that both parties are capable of signing vouchers
	to, err := pca.resolveAccount(rt, params.To)
	builtin.RequireNoErr(rt, err, exitcode.Unwrap(err, exitcode.ErrIllegalState), "failed to resolve to address: %s", params.To)
	from, err := pca.resolveAccount(rt, params.From)
	builtin.RequireNoErr(rt, err, exitcode.Unwrap(err, exitcode.ErrIllegalState), "failed to resolve from address: %s", params.From)

	emptyArr, err := adt.MakeEmptyArray(adt.AsStore(rt), LaneStatesAmtBitwidth)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to create empty array")
	emptyArrCid, err := emptyArr.Root()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to persist empty array")

	st := ConstructState(from, to, emptyArrCid)
	rt.StateCreate(st)

	return nil
}

// Resolves an address to a canonical ID address and requires it to address an account actor.
func (pca *Actor) resolveAccount(rt runtime.Runtime, raw addr.Address) (addr.Address, error) {
	resolved, err := builtin.ResolveToIDAddr(rt, raw)
	if err != nil {
		return addr.Undef, exitcode.ErrIllegalState.Wrapf("failed to resolve address %v: %w", raw, err)
	}

	codeCID, ok := rt.GetActorCodeCID(resolved)
	if !ok {
		return addr.Undef, exitcode.ErrIllegalArgument.Wrapf("no code for address %v", resolved)
	}
	if codeCID != builtin.AccountActorCodeID {
		return addr.Undef, exitcode.ErrForbidden.Wrapf("actor %v must be an account (%v), was %v", raw,
			builtin.AccountActorCodeID, codeCID)
	}

	return resolved, nil
}

////////////////////////////////////////////////////////////////////////////////
// Payment Channel state operations
////////////////////////////////////////////////////////////////////////////////

type UpdateChannelStateParams = paych7.UpdateChannelStateParams
type SignedVoucher = paych7.SignedVoucher

func VoucherSigningBytes(t *SignedVoucher) ([]byte, error) {
	osv := *t
	osv.Signature = nil

	buf := new(bytes.Buffer)
	if err := osv.MarshalCBOR(buf); err != nil {
		return nil, err
	}

	return buf.Bytes(), nil
}

// Modular Verification method
//type ModVerifyParams struct {
//	// Actor on which to invoke the method.
//	Actor addr.Address
//	// Method to invoke.
//	Method abi.MethodNum
//	// Pre-serialized method parameters.
//	Params []byte
//}
type ModVerifyParams = paych0.ModVerifyParams

// Specifies which `Lane`s to be merged with what `Nonce` on channelUpdate
//type Merge struct {
//	Lane  uint64
//	Nonce uint64
//}
type Merge = paych0.Merge

func (pca Actor) UpdateChannelState(rt runtime.Runtime, params *UpdateChannelStateParams) *abi.EmptyValue {
	var st State
	rt.StateReadonly(&st)

	// both parties must sign voucher: one who submits it, the other explicitly signs it
	rt.ValidateImmediateCallerIs(st.From, st.To)
	var signer addr.Address
	if rt.Caller() == st.From {
		signer = st.To
	} else {
		signer = st.From
	}
	sv := params.Sv

	if sv.Signature == nil {
		rt.Abortf(exitcode.ErrIllegalArgument, "voucher has no signature")
	}

	if st.SettlingAt != 0 && rt.CurrEpoch() >= st.SettlingAt {
		rt.Abortf(ErrChannelStateUpdateAfterSettled, "no vouchers can be processed after SettlingAt epoch")
	}

	if len(params.Secret) > MaxSecretSize {
		rt.Abortf(exitcode.ErrIllegalArgument, "secret must be at most 256 bytes long")
	}

	vb, err := VoucherSigningBytes(&sv)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to serialize signedvoucher")

	err = rt.VerifySignature(*sv.Signature, signer, vb)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "voucher signature invalid")

	pchAddr := rt.Receiver()
	svpchIDAddr, found := rt.ResolveAddress(sv.ChannelAddr)
	if !found {
		rt.Abortf(exitcode.ErrIllegalArgument, "voucher payment channel address %s does not resolve to an ID address", sv.ChannelAddr)
	}
	if pchAddr != svpchIDAddr {
		rt.Abortf(exitcode.ErrIllegalArgument, "voucher payment channel address %s does not match receiver %s", svpchIDAddr, pchAddr)
	}

	if rt.CurrEpoch() < sv.TimeLockMin {
		rt.Abortf(exitcode.ErrIllegalArgument, "cannot use this voucher yet!")
	}

	if sv.TimeLockMax != 0 && rt.CurrEpoch() > sv.TimeLockMax {
		rt.Abortf(exitcode.ErrIllegalArgument, "this voucher has expired!")
	}

	if sv.Amount.Sign() < 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "voucher amount must be non-negative, was %v", sv.Amount)
	}

	if len(sv.SecretHash) > 0 {
		hashedSecret := rt.HashBlake2b(params.Secret)
		if !bytes.Equal(hashedSecret[:], sv.SecretHash) {
			rt.Abortf(exitcode.ErrIllegalArgument, "incorrect secret!")
		}
	}

	if sv.Extra != nil {

		code := rt.Send(
			sv.Extra.Actor,
			sv.Extra.Method,
			builtin.CBORBytes(sv.Extra.Data),
			abi.NewTokenAmount(0),
			&builtin.Discard{},
		)
		builtin.RequireSuccess(rt, code, "spend voucher verification failed")
	}

	rt.StateTransaction(&st, func() {
		laneFound := true

		lstates, err := adt.AsArray(adt.AsStore(rt), st.LaneStates, LaneStatesAmtBitwidth)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lanes")

		// Find the voucher lane, creating if necessary.
		laneId := sv.Lane
		laneState := findLane(rt, lstates, sv.Lane)

		if laneState == nil {
			laneState = &LaneState{
				Redeemed: big.Zero(),
				Nonce:    0,
			}
			laneFound = false
		}

		if laneFound {
			if laneState.Nonce >= sv.Nonce {
				rt.Abortf(exitcode.ErrIllegalArgument, "voucher has an outdated nonce, existing nonce: %d, voucher nonce: %d, cannot redeem",
					laneState.Nonce, sv.Nonce)
			}
		}

		// The next section actually calculates the payment amounts to update the payment channel state
		// 1. (optional) sum already redeemed value of all merging lanes
		redeemedFromOthers := big.Zero()
		for _, merge := range sv.Merges {
			if merge.Lane == sv.Lane {
				rt.Abortf(exitcode.ErrIllegalArgument, "voucher cannot merge lanes into its own lane")
			}

			otherls := findLane(rt, lstates, merge.Lane)
			if otherls == nil {
				rt.Abortf(exitcode.ErrIllegalArgument, "voucher specifies invalid merge lane %v", merge.Lane)
				return // makes linters happy
			}

			if otherls.Nonce >= merge.Nonce {
				rt.Abortf(exitcode.ErrIllegalArgument, "merged lane in voucher has outdated nonce, cannot redeem")
			}

			redeemedFromOthers = big.Add(redeemedFromOthers, otherls.Redeemed)
			otherls.Nonce = merge.Nonce
			err = lstates.Set(merge.Lane, otherls)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane %d", merge.Lane)
		}

		// 2. To prevent double counting, remove already redeemed amounts (from
		// voucher or other lanes) from the voucher amount
		laneState.Nonce = sv.Nonce
		balanceDelta := big.Sub(sv.Amount, big.Add(redeemedFromOthers, laneState.Redeemed))
		// 3. set new redeemed value for merged-into lane
		laneState.Redeemed = sv.Amount

		newSendBalance := big.Add(st.ToSend, balanceDelta)

		// 4. check operation validity
		if newSendBalance.LessThan(big.Zero()) {
			rt.Abortf(exitcode.ErrIllegalArgument, "voucher would leave channel balance negative")
		}
		if newSendBalance.GreaterThan(rt.CurrentBalance()) {
			rt.Abortf(exitcode.ErrIllegalArgument, "not enough funds in channel to cover voucher")
		}

		// 5. add new redemption ToSend
		st.ToSend = newSendBalance

		// update channel settlingAt and MinSettleHeight if delayed by voucher
		if sv.MinSettleHeight != 0 {
			if st.SettlingAt != 0 && st.SettlingAt < sv.MinSettleHeight {
				st.SettlingAt = sv.MinSettleHeight
			}
			if st.MinSettleHeight < sv.MinSettleHeight {
				st.MinSettleHeight = sv.MinSettleHeight
			}
		}

		err = lstates.Set(laneId, laneState)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane", laneId)

		st.LaneStates, err = lstates.Root()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save lanes")
	})
	return nil
}

func (pca Actor) Settle(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	var st State
	rt.StateTransaction(&st, func() {
		rt.ValidateImmediateCallerIs(st.From, st.To)

		if st.SettlingAt != 0 {
			rt.Abortf(exitcode.ErrIllegalState, "channel already settling")
		}

		st.SettlingAt = rt.CurrEpoch() + SettleDelay
		if st.SettlingAt < st.MinSettleHeight {
			st.SettlingAt = st.MinSettleHeight
		}
	})
	return nil
}

func (pca Actor) Collect(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	var st State
	rt.StateReadonly(&st)
	rt.ValidateImmediateCallerIs(st.From, st.To)

	if st.SettlingAt == 0 || rt.CurrEpoch() < st.SettlingAt {
		rt.Abortf(exitcode.ErrForbidden, "payment channel not settling or settled")
	}

	// send ToSend to "To"
	codeTo := rt.Send(
		st.To,
		builtin.MethodSend,
		nil,
		st.ToSend,
		&builtin.Discard{},
	)
	builtin.RequireSuccess(rt, codeTo, "Failed to send funds to `To`")

	// the remaining balance will be returned to "From" upon deletion.
	rt.DeleteActor(st.From)

	return nil
}

// Returns the insertion index for a lane ID, with the matching lane state if found, or nil.
func findLane(rt runtime.Runtime, ls *adt.Array, id uint64) *LaneState {
	if id > MaxLane {
		rt.Abortf(exitcode.ErrIllegalArgument, "maximum lane ID is 2^63-1")
	}

	var out LaneState
	found, err := ls.Get(id, &out)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lane %d", id)

	if !found {
		return nil
	}

	return &out
}

Multisig Wallet & Actor

The Multisig actor is a single actor representing a group of Signers. Signers may be external users, other Multisigs, or even the Multisig itself. A Multisig wallet supports a maximum of 256 signers; if more signers are needed, multisigs should be combined into a tree of multisigs.

The implementation of the Multisig Actor can be found here.

The Multisig Actor statuses can be found here.

Storage Mining

The Storage Mining System is the part of the Filecoin Protocol that deals with storing Client’s data and producing proof artifacts that demonstrate correct storage behavior.

Storage Mining is one of the most central parts of the Filecoin protocol overall, as it provides all the required consensus algorithms based on proven storage power in the network. Miners are selected to mine blocks and extend the blockchain based on the storage power that they have committed to the network. Storage is added in units of sectors, and sectors are promises to the network that some storage will remain for a promised duration. In order to participate in Storage Mining, storage miners have to: i) Add storage to the system, and ii) Prove that they maintain a copy of the data they have agreed to throughout the sector's lifetime.

Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable choices. Miners should explore the design space to arrive at something that (a) satisfies protocol and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in Deals), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for various optimizations for implementers, miners, and users to make. In some parts, we describe algorithms that could be replaced by other, more optimized versions, but in those cases it is important that the protocol constraints are satisfied. The protocol constraints are spelled out in clear detail. It is up to implementers who deviate from the algorithms presented here to ensure their modifications satisfy those constraints, especially those relating to protocol security.

Sector

Sectors are the basic units of storage on Filecoin. They have standard sizes, as well as well-defined time-increments for commitments. The size of a sector balances security concerns against usability. A sectorʼs lifetime is determined in the storage market, and sets the promised duration of the sector.

In the first iteration of the protocol, 32GiB and 64GiB sectors are supported. Maximum sector lifetime is determined by the proof algorithm and is initially 18 months. A sector naturally expires when it reaches the end of its lifetime. Additionally, the miner can extend the lifetime of their sectors. Rewards are earned and collaterals recovered when the miner fulfils their commitment.

Individual deals are formed when a storage miner and client are matched on Filecoinʼs storage market. The protocol does not distinguish miners matching with real clients from miners generating self-deals. However, committed capacity is a construction that is introduced to make self-dealing unnecessary and economically irrational. In earlier designs of the network, only sectors filled with deals increased the minerʼs likelihood of winning the block reward. This led to the expectation that miners would attack and exploit the network by playing the role of both storage provider and client, creating a malicious self-deal.

If a sector is only partially full of deals, the network considers the remainder to be committed capacity. Similarly, sectors with no deals are called committed capacity sectors; miners are rewarded for proving to the network that they are pledging storage capacity and are encouraged to find clients who need storage. When a miner finds storage demand, they can upgrade their committed capacity sectors to earn additional revenue in the form of a deal fee from paying clients. More details on how to add storage and upgrade sectors can be found in Adding Storage.

Committed capacity sectors improve minersʼ incentives to store client data, but they donʼt solve the problem entirely. Storing real client files adds some operational overhead for storage miners. In certain circumstances – for example, if a miner values block rewards far more than deal fees – miners might still choose to ignore client data entirely and simply store committed capacity to increase their storage power as rapidly as possible in pursuit of block rewards. This would make Filecoin less useful and limit clientsʼ ability to store data on the network. Filecoin addresses this issue by introducing the concept of verified clients. Verified clients are certified by a decentralized network of verifiers. Once verified, they can post a predetermined amount of verified client deal data to the storage market, set by the size of their DataCap. Sectors with verified client deals are awarded more storage power – and therefore more block rewards – than sectors without. This provides storage miners with an additional incentive to store client data.

Verification is not intended to be scarce – it will be very easy to acquire for anyone with real data to store on Filecoin. Even though verifiers may allocate verified client DataCaps liberally (yet responsibly and transparently) to make onboarding easier, the overall effect should be a dramatic increase in the proportion of useful data stored on Filecoin.

Once a sector is full (either with client data or as committed capacity), the unsealed sector is combined by a proving tree into a single root UnsealedSectorCID. The sealing process then encodes (using CBOR) an unsealed sector into a sealed sector, with the root SealedSectorCID.

This diagram shows the composition of an unsealed sector and a sealed sector.

Unsealed Sectors and Sealed Sectors

Sector Storage & Window PoSt

The Lotus implementation of the Window PoSt scheduler can be found here and the actual execution of Window PoSt on a sector can be found here.

The Lotus block store implementation for sectors can be found here.

Sector Lifecycle

Once the sector has been generated and the deal has been incorporated into the Filecoin blockchain, the storage miner begins generating Proofs-of-Spacetime (PoSt) on the sector, starting to potentially win block rewards and also earn storage fees. Parameters are set so that miners generate and capture more value if they guarantee that their sectors will be around for the duration of the original contract. However, some bounds are placed on a sectorʼs lifetime to improve the network performance.

In particular, as sectors of shorter lifetime are added, the networkʼs capacity can be bottlenecked. The reason is that the chainʼs bandwidth is consumed with new sectors only replacing expiring ones. As a result, a minimum sector lifetime of six months was introduced to more effectively utilize chain bandwidth and miners have the incentive to commit to sectors of longer lifetime. The maximum sector lifetime is limited by the security of the present proofs construction. For a given set of proofs and parameters, the security of Filecoinʼs Proof-of-Replication (PoRep) is expected to decrease as sector lifetimes increase.

It is reasonable to assume that miners enter the network by adding Committed Capacity sectors, that is, sectors that do not contain user data. Once miners agree storage deals with clients, they upgrade their sectors to Regular Sectors. Alternatively, if they find Verified Clients and agree a storage deal with them, they upgrade their sector accordingly. Depending on whether or not a sector includes a (verified) deal, the miner acquires the corresponding storage power in the network.

All sectors are expected to remain live until the end of their sector lifetime, and early dropping of sectors will result in slashing. This is done to provide clients a certain level of guarantee on the reliability of their hosted data. Sector termination comes with a corresponding termination fee.

As with every system, it is expected that sectors will present faults. Although this might degrade the quality offered by the network, the reaction of the miner to the fault drives system decisions on whether or not the miner should be penalized. A miner can recover the faulty sector, let the system terminate the sector automatically after 42 days of faults, or proactively terminate the sector immediately in the case of unrecoverable data loss. In case of a faulty sector, a small penalty fee approximately equal to the block reward that the sector would win per day is applied. The fee is calculated per day that the sector is unavailable to the network, i.e. until the sector is recovered or terminated.

Miners can extend the lifetime of a sector at any time, though the sector will be expected to remain live until it has reached the end of the new sector lifetime. This can be done by submitting an ExtendSectorExpiration message to the chain.

A sector can be in one of the following states.

State Description
Precommitted Miner seals sector and submits miner.PreCommitSector or miner.PreCommitSectorBatch
Committed Miner generates a Seal proof and submits miner.ProveCommitSector or miner.ProveCommitAggregate
Active Miner generates valid PoSt proofs and submits miner.SubmitWindowedPoSt on time
Faulty Miner fails to generate a proof (see Fault section)
Recovering Miner declared a faulty sector as recovered via miner.DeclareFaultsRecovered
Terminated Either the sector has expired, was terminated early by the miner via miner.TerminateSectors, or failed to be proven for 42 consecutive proving periods.

Sector Quality

Given different sector contents, not all sectors have the same usefulness to the network. The notion of Sector Quality distinguishes between sectors with heuristics indicating the presence of valuable data. That distinction is used to allocate more subsidies to higher-quality sectors. To quantify the contribution of a sector to the consensus power of the network, some relevant parameters are described here.

  • Sector Spacetime: This measurement is the sector size multiplied by its promised duration in byte-epochs.
  • Deal Weight: This weight converts spacetime occupied by deals into consensus power. Deal weight of verified client deals in a sector is called Verified Deal Weight and will be greater than the regular deal weight.
  • Deal Quality Multiplier: This factor is assigned to different deal types (committed capacity, regular deals, and verified client deals) to reward different content.
  • Sector Quality Multiplier: Sector quality is assigned on Activation (the epoch when the miner starts proving theyʼre storing the file). The sector quality multiplier is computed as an average of deal quality multipliers (committed capacity, regular deals, and verified client deals), weighted by the amount of spacetime each type of deal occupies in the sector.
$SectorQualityMultiplier = \frac{\sum\nolimits_{deals} DealWeight * DealQualityMultiplier}{SectorSpaceTime}$
  • Raw Byte Power: This measurement is the size of a sector in bytes.
  • Quality-Adjusted Power: This parameter measures the consensus power of stored data on the network, and is equal to Raw Byte Power multiplied by Sector Quality Multiplier.

The multipliers for committed capacity and regular deals are equal to make self dealing irrational in the current configuration of the protocol. In the future, it may make sense to pick different values, depending on other ways of preventing attacks becoming available.

The high quality multiplier and easy verification process for verified client deals facilitate decentralization of miner power. Unlike other proof-of-work-based protocols, like Bitcoin, central control of the network is not simply decided based on the resources that a new participant can bring. In Filecoin, accumulating control either requires significantly more resources or some amount of consent from verified clients, who must make deals with the centralized miners for them to increase their influence. Verified client mechanisms add a layer of social trust to a purely resource-driven network. As long as the process is fair and transparent with accountability and bounded trust, abuse can be contained and minimized. A high sector quality multiplier is a very powerful lever for clients to push storage providers to build features that will be useful to the network as a whole and increase the networkʼs long-term value. The verification process and DataCap allocation are meant to evolve over time as the community learns to automate and improve this process. An illustration of sectors with various contents and their respective sector qualities is shown in the following Figure.

Sector Quality

Sector Quality Adjusted Power is a weighted average of the quality of a sector's space and is based on the size, duration and quality of its deals.

Name Description
QualityBaseMultiplier (QBM) Multiplier for power for storage without deals.
DealWeightMultiplier (DWM) Multiplier for power for storage with deals.
VerifiedDealWeightMultiplier (VDWM) Multiplier for power for storage with verified deals.

The formula for calculating Sector Quality Adjusted Power (or QAP, often referred to simply as power) makes use of the following factors:

  • dealSpaceTime: sum of the duration*size of each deal
  • verifiedSpaceTime: sum of the duration*size of each verified deal
  • baseSpaceTime (spacetime without deals): sectorSize*sectorDuration - dealSpaceTime - verifiedSpaceTime

Based on these the average quality of a sector is:

$avgQuality = \frac{baseSpaceTime*QBM + dealSpaceTime*DWM + verifiedSpaceTime*VDWM}{sectorSize*sectorDuration*QBM}$

The Sector Quality Adjusted Power is:

$sectorQuality = avgQuality*size$

During miner.PreCommitSector and miner.PreCommitSectorBatch, the sector quality is calculated and stored in the sector information.
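The following non-normative sketch mirrors the two formulas above using plain floating-point arithmetic (the on-chain implementation uses fixed-point big-integer math); the qbm, dwm and vdwm arguments stand for QBM, DWM and VDWM:

// Illustrative only: average sector quality and quality-adjusted power,
// computed directly from the formulas above.
func sectorQualityAndQAP(sectorSize, duration, dealSpaceTime, verifiedSpaceTime,
	qbm, dwm, vdwm float64) (avgQuality, qap float64) {

	sectorSpaceTime := sectorSize * duration
	baseSpaceTime := sectorSpaceTime - dealSpaceTime - verifiedSpaceTime // spacetime without deals

	avgQuality = (baseSpaceTime*qbm + dealSpaceTime*dwm + verifiedSpaceTime*vdwm) /
		(sectorSpaceTime * qbm)

	// Quality-adjusted power scales the sector's raw byte power by its quality.
	qap = avgQuality * sectorSize
	return avgQuality, qap
}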

Sector Sealing

Before a Sector can be used, the Miner must seal the Sector: encode the data in the Sector to prepare it for the proving process.

  • Unsealed Sector: A Sector of raw data.
    • UnsealedCID (CommD): The root hash of the Unsealed Sector’s merkle tree. Also called CommD, or “data commitment.”
  • Sealed Sector: A Sector that has been encoded to prepare it for the proving process.
    • SealedCID (CommR): The root hash of the Sealed Sector’s merkle tree. Also called CommR, or “replica commitment.”

Sealing a sector through Proof-of-Replication (PoRep) is a computation-intensive process that results in a unique encoding of the sector. Once data is sealed, storage miners: generate a proof; run a SNARK on the proof to compress it; and finally, submit the result of the compression to the blockchain as a certification of the storage commitment. Depending on the PoRep algorithm and protocol security parameters, cost profiles and performance characteristics vary and tradeoffs have to be made among sealing cost, security, onchain footprint, retrieval latency and so on. However, sectors can be sealed with commercial hardware and sealing cost is expected to decrease over time. The Filecoin Protocol will launch with Stacked Depth Robust (SDR) PoRep with a planned upgrade to Narrow Stacked Expander (NSE) PoRep with improvement in both cost and retrieval latency.

The Lotus-specific set of functions applied to the sealing of a sector can be found here.

Randomness

Randomness is an important attribute that helps the network verify the integrity of Miners’ stored data. Filecoin’s block creation process includes two types of randomness:

  • DRAND: Values pulled from a distributed random beacon
  • VRF: The output of a Verifiable Random Function (VRF), which takes the previous block’s VRF value and produces the current block’s VRF value.

Each block produced in Filecoin includes values pulled from these two sources of randomness.

When Miners submit proofs about their stored data, the proofs incorporate references to randomness added at specific epochs. Assuming these values were not able to be predicted ahead of time, this helps ensure that Miners generated proofs at a specific point in time.

There are two proof types. Each uses one of the two sources of randomness:

  • Windowed PoSt: Uses Drand values
  • Proof of Replication (PoRep): Uses VRF values
Drawing randomness for sector commitments

Tickets are used as input to calculation of the ReplicaID in order to tie Proofs-of-Replication to a given chain, thereby preventing long-range attacks (from another miner in the future trying to reuse SEALs).

The ticket has to be drawn from a finalized block in order to prevent the miner from potentially losing storage (in case of a chain reorg) even though their storage is intact.

Verification should ensure that the ticket was drawn no farther back than necessary by the miner. We note that tickets can uniquely be associated with a given round in the protocol (lest a hash collision be found), but the round number is explicitly declared by the miner in commitSector.

We present precisely how ticket selection and verification should work. In the below, we use the following notation:

  • F– Finality (number of rounds)
  • X– round in which SEALing starts
  • Z– round in which the SEAL appears (in a block)
  • Y– round announced in the SEAL commitSector (should be X, but a miner could use any Y <= X), denoted by the ticket selection
  • T– estimated time for SEAL, dependent on sector size
  • G = T + variance– necessary flexibility to account for network delay and SEAL-time variance.

We expect Filecoin will be able to produce estimates for sector commitment time based on sector sizes, e.g. (estimate, variance) <--- SEALTime(sectors). G and T will be selected using these estimates.

Picking a Ticket to Seal: When starting to prepare a SEAL in round X, the miner should draw a ticket from X-F with which to compute the SEAL.

Verifying a Seal’s ticket: When verifying a SEAL in round Z, a verifier should ensure that the ticket used to generate the SEAL is found in the range of rounds [Z-T-F-G, Z-T-F+G].

                               Prover
           ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─

         X-F ◀───────F────────▶ X ◀──────────T─────────▶ Z
     -G   .  +G                 .                        .
  ───(┌───────┐)───────────────( )──────────────────────( )────────▶
      └───────┘                 '                        '        time
 [Z-T-F-G, Z-T-F+G]

          └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                              Verifier

Note that the prover here is submitting a message on chain (i.e. the SEAL). Using an older ticket than necessary to generate the SEAL is something the miner may do to gain more confidence about finality (since we are in a probabilistically final system). However it has a cost in terms of securing the chain in the face of long-range attacks (specifically, by mixing in chain randomness here, we ensure that an attacker going back a month in time to try and create their own chain would have to completely regenerate any and all sectors drawing randomness since to use for their fork’s power).

We break this down as follows:

  • The miner should draw from X-F.
  • The verifier wants to find what X-F should have been (to ensure the miner is not drawing from farther back) even though Y (i.e. the round of the ticket actually used) is an unverifiable value.
  • Thus, the verifier will need to make an inference about what X-F is likely to have been based on:
    • (known) round in which the message is received (Z)
    • (known) finality value (F)
    • (approximate) SEAL time (T)
  • Because T is an approximate value, and to account for network delay and variance in SEAL time across miners, the verifier allows for G offset from the assumed value of X-F: Z-T-F, hence verifying that the ticket is drawn from the range [Z-T-F-G, Z-T-F+G].

In practice, the Filecoin protocol will include a MAX_SEAL_TIME for each sector size and proof type.
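As a minimal sketch of the verifier-side check described above (using the notation Z, T, F and G from this section):

// Sketch only: returns true if the round the ticket was drawn from lies in
// the acceptable window [Z-T-F-G, Z-T-F+G].
func ticketRoundAcceptable(ticketRound, sealRound, sealTime, finality, variance int64) bool {
	lo := sealRound - sealTime - finality - variance // Z - T - F - G
	hi := sealRound - sealTime - finality + variance // Z - T - F + G
	return ticketRound >= lo && ticketRound <= hi
}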

Sector Faults

It is very important for storage providers to have a strong incentive to both report the failure to the chain and attempt recovery from the fault in order to uphold the storage guarantee for the networkʼs clients. Without this incentive, it is impossible to distinguish an honest minerʼs hardware failure from malicious behavior, which is necessary to treat miners fairly. The size of the fault fees depends on the severity of the failure and the rewards that the miner is expected to earn from the sector, to make sure incentives are aligned. The two types of sector storage fault fees are:

  • Sector fault fee: This fee is paid per sector per day while the sector is in a faulty state. This fee is not paid the first day the system detects the fault, allowing a one-day grace period for recovery without fee. The size of the sector fault fee is slightly more than the amount the sector is expected to earn per day in block rewards. If a sector remains faulty for more than 42 consecutive days, the sector will pay a termination fee and be removed from the chain state. As storage miner reliability increases above a reasonable threshold, the risk posed by these fees decreases rapidly.
  • Sector termination fee: A sector can be terminated before its expiration through automatic faults or miner decisions. A termination fee is charged that is, in principle, equivalent to how much a sector has earned so far, up to a limit in order to avoid discouraging long sector lifetimes. In an active termination, the miner decides to stop mining and pays a fee to leave. In a fault termination, a sector is in a faulty state for too long, and the chain terminates the deal, returns unpaid deal fees to the client and penalizes the miner. The termination fee is currently capped at 90 days' worth of the block reward that a sector would earn. Miners are responsible for deciding to comply with local regulations, and may sometimes need to accept a termination fee for complying with content laws. (A rough sketch of these fees follows this list.)

Many of the concepts and parameters above make use of the notion of “how much a sector would have earned in a day” in order to understand and align incentives for participants. This concept is robustly tracked and extrapolated on chain.
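The sketch below illustrates the intuition of these two fees only; the exact on-chain formulas live in the miner actor and use the network's estimate of the expected daily sector reward:

// Illustrative only: fault and termination fees expressed in terms of the
// sector's estimated daily block reward ("how much a sector would earn in a day").
func ongoingFaultFee(estimatedDailyReward big.Int, faultyDays int64) big.Int {
	// Charged per day in the faulty state (after a one-day grace period);
	// set slightly above the expected daily reward.
	return big.Mul(estimatedDailyReward, big.NewInt(faultyDays))
}

func terminationFee(rewardEarnedSoFar, estimatedDailyReward big.Int) big.Int {
	// In principle: what the sector has earned so far, capped at 90 days of reward.
	maxFee := big.Mul(estimatedDailyReward, big.NewInt(90))
	return big.Min(rewardEarnedSoFar, maxFee)
}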

Sector Recovery

Miners should try to recover faulty sectors in order to avoid paying the penalty, which is approximately equal to the block reward that the miner would receive from that sector. After fixing technical issues, the miner should call RecoveryDeclaration and produce a WindowPoSt proof for the corresponding challenge in order to regain the power from that sector.

Note that if a sector is in a faulty state for 42 consecutive days it will be terminated and the miner will receive a penalty. The miner can terminate the sector themselves by calling TerminationDeclaration, if they know that they cannot recover it, in which case they will receive a smaller penalty fee.

Both the RecoveryDeclaration and the TerminationDeclaration can be found in the miner actor implementation.

Adding Storage

A Miner adds more storage in the form of Sectors. Adding more storage is a two-step process:

  1. PreCommitting a Sector: A Miner publishes a Sector’s SealedCID, through miner.PreCommitSector or miner.PreCommitSectorBatch, and makes a deposit. The Sector is now registered to the Miner, and the Miner must ProveCommit the Sector or lose their deposit.
  2. ProveCommitting a Sector: The Miner provides a Proof of Replication (PoRep) for the Sector through miner.ProveCommitSector or miner.ProveCommitAggregate. This proof must be submitted AFTER a delay (the InteractiveEpoch), and BEFORE PreCommit expiration.

This two-step process provides assurance that the Miner’s PoRep actually proves that the Miner has replicated the Sector data and is generating proofs from it:

  • ProveCommitments must happen AFTER the InteractiveEpoch (150 blocks after Sector PreCommit), as the randomness included at that epoch is used in the PoRep.
  • ProveCommitments must happen BEFORE the PreCommit expiration, which is a boundary established to make sure Miners don’t have enough time to “fake” PoRep generation.

For each Sector successfully ProveCommitted, the Miner becomes responsible for continuously proving the existence of their Sectors’ data. In return, the Miner is awarded storage power.
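A minimal sketch of the timing constraint, assuming the 150-epoch InteractiveEpoch delay mentioned above and a placeholder PreCommit validity window (the real bound is a protocol parameter that depends on the proof type):

// Sketch only: a ProveCommit is acceptable strictly after the InteractiveEpoch
// and no later than the PreCommit expiration.
const (
	interactiveDelay  = 150  // epochs after PreCommit before ProveCommit is allowed
	preCommitValidFor = 1500 // placeholder for the PreCommit expiration window
)

func canProveCommit(preCommitEpoch, currentEpoch int64) bool {
	afterInteractiveEpoch := currentEpoch > preCommitEpoch+interactiveDelay
	beforeExpiry := currentEpoch <= preCommitEpoch+preCommitValidFor
	return afterInteractiveEpoch && beforeExpiry
}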

Upgrading Sectors

Miners are granted storage power in exchange for the storage space they dedicate to Filecoin. Ideally, this storage space is used to store data on behalf of Clients, but there may not always be enough Clients to utilize all the space a Miner has to offer.

In order for a Miner to maximize storage power (and profit), they should take advantage of all available storage space immediately, even before they find enough Clients to use this space.

To facilitate this, there are two types of Sectors that may be sealed and ProveCommitted:

  • Regular Sector: A Sector that contains Client data
  • Committed Capacity (CC) Sector: A Sector with no data (all zeroes)

Miners are free to choose which types of Sectors to store. CC sectors, in particular, allow Miners to immediately make use of existing disk space: earning storage power and a higher chance at producing a block. Miners can decide if they should upgrade their CC sectors to take client deals or continue proving CC sectors. Currently, CC sectors store randomness by default in the client implementation, but this does not preclude miners from storing any type of useful data that increases their private utility in CC sectors (as long as it is legal). The protocol expects that new use-cases and diversity will emerge out of such behaviour.

To incentivize Miners to hoard storage space and dedicate it to Filecoin, CC Sectors have a unique capability: they can be “upgraded” to Regular Sectors (also called “replacing a CC Sector”).

Miners upgrade their ProveCommitted CC Sectors by PreCommitting a Regular Sector, and specifying that it should replace an existing CC Sector. Once the Regular Sector is successfully ProveCommitted, it will replace the existing CC Sector. If the newly ProveCommitted Regular sector contains a Verified Client deal, i.e., a deal with higher Sector Quality, then the miner’s storage power will increase accordingly.

Upgrading capacity currently involves resealing, that is, creating a unique representation of the new data included in the Sector through a computationally intensive process. Looking ahead, committed capacity upgrades should eventually be possible without a reseal. A succinct and publicly verifiable proof that the committed capacity has been correctly replaced with replicated data should achieve this goal. However, this mechanism must be fully specified to preserve the security and incentives of the network before it can be implemented and is, therefore, left as a future improvement.

Storage Miner

Storage Mining Subsystem

The Filecoin Storage Mining Subsystem ensures a storage miner can effectively commit storage to the Filecoin protocol in order to both:

  • Participate in the Filecoin Storage Market by taking on client data and participating in storage deals.
  • Participate in Filecoin Storage Power Consensus by verifying and generating blocks to grow the Filecoin blockchain and earning block rewards and fees for doing so.

The above involves a number of steps for putting storage online and maintaining it, such as:

Filecoin Proofs

Proof of Replication

A Proof of Replication (PoRep) is a proof that a Miner has correctly generated a unique replica of some underlying data.

In practice, the underlying data is the raw data contained in an Unsealed Sector, and a PoRep is a SNARK proof that the sealing process was performed correctly to produce a Sealed Sector (See Sealing a Sector).

It is important to note that the replica should not only be unique to the miner, but also to the time when a miner has actually created the replica, i.e., sealed the sector. This means that if the same miner produces a sealed sector out of the same raw data twice, then this would count as a different replica.

When Miners commit to storing data, they must first produce a valid Proof of Replication.

Proof of Spacetime

A Proof of Spacetime (aka PoSt) is a long-term assurance of a Miner’s continuous storage of their Sectors’ data. This is not a single proof, but a collection of proofs the Miner has submitted over time. Periodically, a Miner must add to these proofs by submitting a WindowPoSt:

  • Fundamentally, a WindowPoSt is a collection of merkle proofs over the underlying data in a Miner’s Sectors.
  • WindowPoSts bundle proofs of various leaves across groups of Sectors (called Partitions).
  • These proofs are submitted as a single SNARK.

The historical and ongoing submission of WindowPoSts creates assurance that the Miner has been storing, and continues to store the Sectors they agreed to store in the storage deal.

Once a Miner successfully adds and ProveCommits a Sector, the Sector is assigned to a Deadline: a specific window of time during which PoSts must be submitted. The day is broken up into 48 individual Deadlines of 30 minutes each, and ProveCommitted Sectors are assigned to one of these 48 Deadlines.

  • PoSts may only be submitted for the currently-active Deadline. Deadlines are open for 30 minutes, starting from the Deadline’s “Open” epoch and ending at its “Close” epoch.
  • PoSts must incorporate randomness pulled from a random beacon. This randomness becomes publicly available at the Deadline’s “Challenge” epoch, which is 20 epochs prior to its “Open” epoch.
  • Deadlines also have a FaultCutoff epoch, 70 epochs prior to its “Open” epoch. After this epoch, Faults can no longer be declared for the Deadline’s Sectors.

Miner Accounting

A Miner’s financial gain or loss is affected by the following three actions:

  1. Miners deposit tokens to act as collateral for their PreCommitted and ProveCommitted Sectors
  2. Miners earn tokens from block rewards, when they are elected to mine a new block and extend the blockchain.
  3. Miners lose tokens if they fail to prove storage of a sector and are given penalties as a result.
Balance Requirements

A Miner’s token balance MUST cover ALL of the following:

  • PreCommit Deposits: When a Miner PreCommits a Sector, they must supply a “precommit deposit” for the Sector, which acts as collateral. If the Sector is not ProveCommitted on time, this deposit is removed and burned.
  • Initial Pledge: When a Miner ProveCommits a Sector, they must supply an “initial pledge” for the Sector, which acts as collateral. If the Sector is terminated, this deposit is removed and burned along with rewards earned by this sector up to a limit.
  • Locked Funds: When a Miner receives tokens from block rewards, the tokens are locked and added to the Miner’s vesting table to be unlocked linearly over some future epochs.
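A sketch of this requirement, assuming the big.Int type from go-state-types; it mirrors the availability check described for the Storage Miner Actor later in this section:

// Sketch only: the miner's balance must cover precommit deposits, initial
// pledge and locked (vesting) funds; only the excess is available.
func availableBalance(actorBalance, preCommitDeposits, initialPledge, lockedFunds big.Int) (big.Int, error) {
	required := big.Sum(preCommitDeposits, initialPledge, lockedFunds)
	if actorBalance.LessThan(required) {
		// IP-debt-like condition: withdrawals are not allowed in this state.
		return big.Zero(), xerrors.Errorf("balance %v below required %v", actorBalance, required)
	}
	return big.Sub(actorBalance, required), nil
}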
Faults, Penalties and Fee Debt

Faults

A Sector’s PoSts must be submitted on time, or that Sector is marked “faulty.” There are three types of faults:

  • Declared Fault: When the Miner explicitly declares a Sector “faulty” before its Deadline’s FaultCutoff. Recall that WindowPoSt proofs are submitted per partition for a specific ChallengeWindow. A miner has to declare the sector as faulty before the ChallengeWindow for the particular partition opens. Until the sectors are recovered they will be masked from proofs in subsequent proving periods.
  • Detected Fault: Partitions of sectors without PoSt proof verification records, which have not been declared faulty before the deadline’s FaultCutoff epoch, are marked as detected faults.
  • Skipped Fault: If a sector is currently in active or recovering state and has not been declared faulty before, but the miner’s PoSt submission does not include a proof for this sector, then this is a “skipped fault” sector (also referred to as “skipped undeclared fault”). In other words, when a miner submits PoSt proofs for a partition but does not include proofs for some sectors in the partition, then these sectors are in “skipped fault” state. This is in contrast to the “detected fault” state, where the miner does not submit a PoSt proof for any sector in the partition at all. The skipped fault is helpful in case a sector becomes faulty after the FaultCutoff epoch.

Note that the “skipped fault” allows for sector-wise fault penalties, as compared to partition-wide faults and penalties, as is the case with “detected faults”.
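The following non-normative sketch summarizes how the three fault types differ; the boolean inputs are placeholders for the corresponding on-chain conditions:

// Sketch only: classify a faulty sector according to the definitions above.
func classifyFault(declaredBeforeFaultCutoff, partitionPoStSubmitted, sectorProofIncluded bool) string {
	switch {
	case declaredBeforeFaultCutoff:
		return "declared fault" // declared before the deadline's FaultCutoff
	case !partitionPoStSubmitted:
		return "detected fault" // no PoSt verification record for the whole partition
	case !sectorProofIncluded:
		return "skipped fault" // partition PoSt submitted, but this sector's proof was omitted
	default:
		return "no fault"
	}
}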

Deadlines

A deadline is a period of WPoStChallengeWindow epochs that divides a proving period. Sectors are assigned to a deadline on ProveCommit, by calling either miner.ProveCommitSector, or miner.ProveCommitAggregate, and will remain assigned to it throughout their lifetime. Recall that Sectors are also assigned to a partition.

A miner must submit a miner.SubmitWindowedPoSt for each deadline.

There are four relevant epochs associated to a deadline:

Name Distance from Open Description
Open 0 Epoch from which a PoSt Proof for this deadline can be submitted.
Close WPoStChallengeWindow Epoch after which a PoSt Proof for this deadline will be rejected.
FaultCutoff -FaultDeclarationCutoff Epoch after which miner.DeclareFaults and miner.DeclareFaultsRecovered for sectors in the upcoming deadline are rejected.
Challenge -WPoStChallengeLookback Epoch at which the randomness for the challenges is available.
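A sketch of how the four epochs relate for a single deadline, assuming the protocol parameters named in the table (WPoStChallengeWindow, FaultDeclarationCutoff, WPoStChallengeLookback):

// Sketch only: the four epochs of a deadline, derived from its Open epoch.
type deadlineEpochs struct {
	Open, Close, FaultCutoff, Challenge abi.ChainEpoch
}

func epochsForDeadline(open abi.ChainEpoch) deadlineEpochs {
	return deadlineEpochs{
		Open:        open,
		Close:       open + WPoStChallengeWindow,   // proofs for this deadline rejected afterwards
		FaultCutoff: open - FaultDeclarationCutoff, // fault/recovery declarations rejected afterwards
		Challenge:   open - WPoStChallengeLookback, // challenge randomness becomes available here
	}
}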

Fault Recovery

Regardless of how a fault first becomes known (declared, skipped, detected), the sector stays faulty and is excluded from future proofs until the miner explicitly declares it recovered. The declaration of recovery restores the sector to the proving set at the start of the subsequent proving period. When a PoSt for a just-recovered sector is received, power for that sector is restored.

Penalties

A Miner may accrue penalties for many reasons:

  • PreCommit Expiry Penalty: Occurs if a Miner fails to ProveCommit a PreCommitted Sector in time; the Sector’s precommit deposit is burned.
  • Undeclared Fault Penalty: Occurs if a Miner fails to submit a PoSt for a Sector on time. Depending on whether the “Skipped Fault” option is implemented, this penalty applies to either a sector or a whole partition.
  • Declared Fault Penalty: Occurs if a Miner fails to submit a PoSt for a Sector on time, but has declared the Sector faulty before the system finds out (otherwise the fault falls under the “Undeclared Fault Penalty” above). This penalty fee should be lower than the undeclared fault penalty, in order to incentivize Miners to declare faults early.
  • Ongoing Fault Penalty: Occurs every Proving Period a Miner fails to submit a PoSt for a Sector.
  • Termination Penalty: Occurs if a Sector is terminated before its expiration.
  • Consensus Fault Penalty: Occurs if a Miner commits a consensus fault and is reported.

When a Miner accrues penalties, the amount penalized is tracked as “Fee Debt.” If a Miner has Fee Debt, they are restricted from certain actions until the amount owed is paid off. Miners with Fee Debt may not:

  • PreCommit new Sectors
  • Declare faulty Sectors “recovered”
  • Withdraw balance

Faults are implied to be “temporary” - that is, a Miner that temporarily loses internet connection may choose to declare some Sectors for their upcoming proving period as faulty, because the Miner knows they will eventually regain the ability to submit proofs for those Sectors. This declaration allows the Miner to still submit a valid proof for their Deadline (minus the faulty Sectors). This is very important for Miners, as missing a Deadline’s PoSt entirely incurs a high penalty.

Storage Mining Cycle

On every epoch, block miners should check whether they win the Secret Leader Election and, if elected, determine whether they can propose a block by running the Winning PoSt. Epochs are currently set to take 30 seconds, in order to account for Winning PoSt and network propagation around the world. The detailed steps for the above process can be found in the Secret Leader Election section.

Here we provide a detailed description of the mining cycle.

Active Miner Mining Cycle

In order to mine blocks on the Filecoin blockchain a miner must be running Block Validation at all times, keeping track of recent blocks received and the heaviest current chain (based on Expected Consensus).

With every new tipset, the miner can use their committed power to attempt to craft a new block.

For additional details around how consensus works in Filecoin, see Expected Consensus. For the purposes of this section, there is a consensus protocol (Expected Consensus) that guarantees a fair process for determining what blocks have been generated in a round, whether a miner is eligible to mine a block, and other rules pertaining to the production of some artifacts required of valid blocks (e.g. Tickets, WinningPoSt).

Mining Cycle

After the chain has caught up to the current head using ChainSync, the mining process is as follows (we go into more detail on epoch timing below):

  • The node receives and transmits messages using the Message Syncer
  • At the same time the node receives blocks through BlockSync.
    • Each block has an associated timestamp and epoch (quantized time window in which it was crafted)
    • Blocks are validated as they come in, following the block validation rules
  • After an epoch’s “cutoff”, the miner should take all the valid blocks received for this epoch and assemble them into tipsets according to Tipset validation rules
  • The miner then attempts to mine atop the heaviest tipset (as calculated with EC’s weight function) using its smallest ticket to run leader election
    • The miner runs Leader Election using the most recent random output by a drand beacon.
      • if this yields a valid ElectionProof, the miner generates a new ticket and winning PoSt for inclusion in the block.
      • the miner then assembles a new block (see “block creation” below) and waits until this epoch’s quantized timestamp to broadcast it

This process is repeated until either the Leader Election process yields a winning ticket (in EC) and the miner publishes a block or a new valid block comes in from the network.

At any height H, there are three possible situations:

  • The miner is eligible to mine a block: they produce their block and propagate it. They then resume mining at the next height H+1.
  • The miner is not eligible to mine a block but has received blocks: they form a Tipset with them and resume mining at the next height H+1.
  • The miner is not eligible to mine a block and has received no blocks: prompted by their clock they run leader election again, incrementing the epoch number.

Anytime a miner receives new valid blocks, it should evaluate what is the heaviest Tipset it knows about and mine atop it.
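The loop below is a high-level, non-normative sketch of the process above; every helper it calls (heaviestTipset, runLeaderElection, newTicket, generateWinningPoSt, assembleBlock, broadcastAt, epochBoundary) is a placeholder for implementation-specific logic:

// Sketch only: one iteration of the mining loop for a given epoch.
func mineOneEpoch(epoch int64) {
	base := heaviestTipset() // heaviest tipset assembled from blocks validated this epoch

	// Run leader election using the most recent drand randomness.
	electionProof, won := runLeaderElection(base, epoch)
	if !won {
		return // not eligible this epoch; keep following the chain
	}

	ticket := newTicket(base, epoch)
	winningPoSt := generateWinningPoSt(base, epoch)
	blk := assembleBlock(base, epoch, electionProof, ticket, winningPoSt)

	// Hold the block until the epoch's quantized timestamp, then broadcast.
	broadcastAt(blk, epochBoundary(epoch))
}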

Epoch Timing

Mining Cycle Timing

The timing diagram above describes the sequence of block creation “mining”, propagation and reception.

This sequence of events applies only when the node is in the CHAIN_FOLLOW syncing mode. Nodes in other syncing modes do not mine blocks.

The upper row represents the conceptual consumption channel consisting of successive receiving periods Rx during which nodes validate incoming blocks. The lower row is the conceptual production channel made up of a period of mining M followed by a period of transmission Tx (which lasts long enough for blocks to propagate throughout the network). The lengths of the periods are not to scale.

The above diagram represents the important events within an epoch:

  • Epoch boundary: change of current epoch. New blocks are mined in the new epoch and timestamped accordingly.
  • Epoch cutoff: blocks from the prior epoch propagated on the network are no longer accepted. Miners can form a new tipset to mine on.

In an epoch, blocks are received and validated during Rx up to the prior epoch’s cutoff. At the cutoff, the miner computes the heaviest tipset from the blocks received during Rx, and uses it as the head to build on during the next mining period M. If mining is successful, the miner sets the block’s timestamp to the epoch boundary and waits until the boundary to release the block. While some blocks could be submitted a bit later, blocks are all transmitted during Tx, the transmission period.

The timing validation rules are as follows:

  • Blocks whose timestamps are not exactly on the epoch boundary are rejected.
  • Blocks received with a timestamp in the future are rejected.
  • Blocks received after the cutoff are rejected.
    • Note that those blocks are not invalid, just not considered for the miner’s own tipset building. Tipsets received with such a block as a parent should be accepted.

In a fully synchronized network, most of period Rx sees no network traffic; only its beginning should. While there may be variance in operator mining time, most miners are expected to finish mining by the epoch boundary.

Let’s look at an example with a block time of 30s and a cutoff at 15s.

  • T = 0: start of epoch n
  • T in [0, 15]: miner A receives, validates and propagates incoming blocks. Valid blocks should have timestamp 0.
  • T = 15: epoch cutoff for n-1, A assembles the heaviest tipset and starts mining atop it.
  • T = 25: A successfully generates a block, sets its timestamp to 30, and waits until the epoch boundary (at 30) to release it.
  • T = 30: start of epoch n + 1, A releases its block for epoch n.
  • T in [30, 45]: A receives and validates incoming blocks, their timestamp is 30.
  • T = 45: epoch cutoff for n, A forms tipsets and starts mining atop the heaviest.
  • T = 60: start of epoch n + 2.
  • T in [60, 75]: A receives and validates incoming blocks
  • T = 67: A successfully generates a block, sets its timestamp to 60 and releases it.
  • T = 75: epoch cutoff for n+1…

Above, A mines quickly in epoch n and slowly in epoch n+1. So long as the miner’s block is timestamped at the epoch boundary and released before the cutoff, it will be accepted by other miners.

In practice miners should not be releasing blocks close to the epoch cutoff. Implementations may choose to locally randomize the exact time of the cutoff in order to prevent such behavior (while this means it may accept/reject blocks others do not, in practice this will not affect the miners submitting blocks on time).
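A minimal sketch of the timing validation rules above; epochBoundaryTime and cutoffOffset are hypothetical helpers standing in for the epoch-to-timestamp mapping and the local cutoff (e.g. 15s after the boundary):

// Sketch only: validate a received block's timing against the current epoch.
func validateBlockTiming(blockTimestamp, now, epoch int64) error {
	boundary := epochBoundaryTime(epoch)
	switch {
	case blockTimestamp != boundary:
		return xerrors.Errorf("timestamp %d is not exactly on the epoch boundary %d", blockTimestamp, boundary)
	case blockTimestamp > now:
		return xerrors.Errorf("block timestamp is in the future")
	case now > boundary+cutoffOffset:
		// Not invalid, but arrived too late to be used for this miner's own tipset building.
		return xerrors.Errorf("block received after the cutoff")
	}
	return nil
}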

Full Miner Lifecycle
Step 0: Registration and Market participation

To initially become a miner, a miner first registers a new miner actor on-chain. This is done through the storage power actor’s CreateStorageMiner method. The call will then create a new miner actor instance and return its address.

The next step is to place one or more storage market asks on the market. This is done off-chain as part of storage market functions. A miner may create a single ask for their entire storage, or partition their storage up in some way with multiple asks (at potentially different prices).

After that, they need to make deals with clients and begin filling up sectors with data. For more information on making deals, see the Storage Market. The miner will need to put up storage deal collateral for the deals they have entered into.

When they have a full sector, they should seal it. This is done by invoking the Sector Sealer.

Owner/Worker distinction

The miner actor has two distinct ‘controller’ addresses. One is the worker, which is the address which will be responsible for doing all of the work, submitting proofs, committing new sectors, and all other day to day activities. The owner address is the address that created the miner, paid the collateral, and has block rewards paid out to it. The reason for the distinction is to allow different parties to fulfil the different roles. One example would be for the owner to be a multisig wallet, or a cold storage key, and the worker key to be a ‘hot wallet’ key.

Changing Worker Addresses

Note that any change to worker keys after registration must be appropriately delayed in relation to randomness lookback for SEALing data (see this issue).

Step 1: Committing Sectors

When the miner has completed their first seal, they should post it on-chain using the Storage Miner Actor’s ProveCommitSector function. The miner will need to put up pledge collateral in proportion to the amount of storage they commit on chain. The miner will now gain power for this particular sector upon successful ProveCommitSector.

You can read more about sectors here and about how sectors relate to power here.

Step 2: Producing Blocks

Once the miner has power on the network, they are randomly chosen by the “Secret Leader Election” algorithm to mine and submit blocks proportionally to the power they hold, i.e., if a miner holds 3% of the overall network power they will be chosen in 3% of the cases. The winning miner is chosen by the system and the miner can prove that they were chosen by submitting an Election Proof.

When a miner is chosen to produce a block, they must submit a WinningPoSt proof. This process is as follows: an elected miner gets the randomness value through the DRAND randomness generator based on the current epoch and uses it to generate WinningPoSt.

WinningPoSt uses the randomness to select a sector for which the miner must generate a proof. If the miner is not able to generate this proof within some predefined amount of time, then they will not be able to create a block.

Faults

If a miner detects Storage Faults among their sectors (any sort of storage failure that would prevent them from crafting a PoSt), they should declare these faults as discussed earlier.

The miner will be unable to craft valid PoSt proofs over faulty sectors, thereby reducing their chances of being able to create a valid block (i.e., adding a Winning PoSt). By declaring a fault, the miner will no longer be challenged on that sector, and will lose power accordingly.

Step 3: Deal/Sector Expiration

In order to stop mining, a miner must complete all of its storage deals. Once all deals in a sector have expired, the sector itself will expire, thereby enabling the miner to remove the associated collateral from their account.

Storage Miner Actor

The balance of the miner actor should be greater than or equal to the sum of its PreCommitDeposits and LockedFunds. It is possible for the balance to fall below the sum of PreCommitDeposits, LockedFunds and InitialPledge requirements; this is a bad state (IP debt) that limits the miner actor’s behavior (e.g., no balance withdrawals are allowed). Excess balance, as computed by st.GetAvailableBalance, is withdrawable or usable for pre-commit deposits or pledge lock-up.
type State struct {
	// Information not related to sectors.
	Info cid.Cid

	PreCommitDeposits abi.TokenAmount // Total funds locked as PreCommitDeposits
	LockedFunds       abi.TokenAmount // Total rewards and added funds locked in vesting table

	VestingFunds cid.Cid // VestingFunds (Vesting Funds schedule for the miner).

	FeeDebt abi.TokenAmount // Absolute value of debt this miner owes from unpaid fees

	InitialPledge abi.TokenAmount // Sum of initial pledge requirements of all active sectors

	// Sectors that have been pre-committed but not yet proven.
	PreCommittedSectors cid.Cid // Map, HAMT[SectorNumber]SectorPreCommitOnChainInfo

	// PreCommittedSectorsCleanUp maintains the state required to cleanup expired PreCommittedSectors.
	PreCommittedSectorsCleanUp cid.Cid // BitFieldQueue (AMT[Epoch]*BitField)

	// Allocated sector IDs. Sector IDs can never be reused once allocated.
	AllocatedSectors cid.Cid // BitField

	// Information for all proven and not-yet-garbage-collected sectors.
	//
	// Sectors are removed from this AMT when the partition to which the
	// sector belongs is compacted.
	Sectors cid.Cid // Array, AMT[SectorNumber]SectorOnChainInfo (sparse)

	// DEPRECATED. This field will change names and no longer be updated every proving period in a future upgrade
	// The first epoch in this miner's current proving period. This is the first epoch in which a PoSt for a
	// partition at the miner's first deadline may arrive. Alternatively, it is after the last epoch at which
	// a PoSt for the previous window is valid.
	// Always greater than zero, this may be greater than the current epoch for genesis miners in the first
	// WPoStProvingPeriod epochs of the chain; the epochs before the first proving period starts are exempt from Window
	// PoSt requirements.
	// Updated at the end of every period by a cron callback.
	ProvingPeriodStart abi.ChainEpoch

	// DEPRECATED. This field will be removed from state in a future upgrade.
	// Index of the deadline within the proving period beginning at ProvingPeriodStart that has not yet been
	// finalized.
	// Updated at the end of each deadline window by a cron callback.
	CurrentDeadline uint64

	// The sector numbers due for PoSt at each deadline in the current proving period, frozen at period start.
	// New sectors are added and expired ones removed at proving period boundary.
	// Faults are not subtracted from this in state, but on the fly.
	Deadlines cid.Cid

	// Deadlines with outstanding fees for early sector termination.
	EarlyTerminations bitfield.BitField

	// True when miner cron is active, false otherwise
	DeadlineCronActive bool
}
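The standalone sketch below expresses these balance relationships in terms of the State fields above; it is illustrative only and is not the canonical CheckBalanceInvariants or GetAvailableBalance implementation.

package example

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
)

// balanceCovered reports whether the actor balance covers pre-commit deposits
// plus locked (vesting) funds: the hard invariant described above.
func balanceCovered(balance, preCommitDeposits, lockedFunds abi.TokenAmount) bool {
	return !balance.LessThan(big.Add(preCommitDeposits, lockedFunds))
}

// inIPDebt reports whether the balance additionally falls short of the initial
// pledge requirement ("IP debt"), the state in which balance withdrawals are disallowed.
func inIPDebt(balance, preCommitDeposits, lockedFunds, initialPledge abi.TokenAmount) bool {
	return balance.LessThan(big.Add(big.Add(preCommitDeposits, lockedFunds), initialPledge))
}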
package miner

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"math"

	miner7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/miner"

	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-bitfield"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/crypto"
	"github.com/filecoin-project/go-state-types/dline"
	"github.com/filecoin-project/go-state-types/exitcode"
	rtt "github.com/filecoin-project/go-state-types/rt"
	miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner"
	miner2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/miner"
	miner3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/miner"
	miner5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/miner"
	cid "github.com/ipfs/go-cid"
	cbg "github.com/whyrusleeping/cbor-gen"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/market"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/power"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/reward"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime/proof"
	. "github.com/filecoin-project/specs-actors/v8/actors/util"
	"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
	"github.com/filecoin-project/specs-actors/v8/actors/util/smoothing"
)

type Runtime = runtime.Runtime

const (
	// The first 1000 actor-specific codes are left open for user error, i.e. things that might
	// actually happen without programming error in the actor code.
	//ErrToBeDetermined = exitcode.FirstActorSpecificExitCode + iota

	// The following errors are particular cases of illegal state.
	// They're not expected to ever happen, but if they do, distinguished codes can help us
	// diagnose the problem.
	ErrBalanceInvariantBroken = 1000
)

type Actor struct{}

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.ControlAddresses,
		3:                         a.ChangeWorkerAddress,
		4:                         a.ChangePeerID,
		5:                         a.SubmitWindowedPoSt,
		6:                         a.PreCommitSector,
		7:                         a.ProveCommitSector,
		8:                         a.ExtendSectorExpiration,
		9:                         a.TerminateSectors,
		10:                        a.DeclareFaults,
		11:                        a.DeclareFaultsRecovered,
		12:                        a.OnDeferredCronEvent,
		13:                        a.CheckSectorProven,
		14:                        a.ApplyRewards,
		15:                        a.ReportConsensusFault,
		16:                        a.WithdrawBalance,
		17:                        a.ConfirmSectorProofsValid,
		18:                        a.ChangeMultiaddrs,
		19:                        a.CompactPartitions,
		20:                        a.CompactSectorNumbers,
		21:                        a.ConfirmUpdateWorkerKey,
		22:                        a.RepayDebt,
		23:                        a.ChangeOwnerAddress,
		24:                        a.DisputeWindowedPoSt,
		25:                        a.PreCommitSectorBatch,
		26:                        a.ProveCommitAggregate,
		27:                        a.ProveReplicaUpdates,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.StorageMinerActorCodeID
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

/////////////////
// Constructor //
/////////////////

// Storage miner actors are created exclusively by the storage power actor. In order to break a circular dependency
// between the two, the construction parameters are defined in the power actor.
type ConstructorParams = power.MinerConstructorParams

func (a Actor) Constructor(rt Runtime, params *ConstructorParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.InitActorAddr)

	checkControlAddresses(rt, params.ControlAddrs)
	checkPeerInfo(rt, params.PeerId, params.Multiaddrs)

	if !CanWindowPoStProof(params.WindowPoStProofType) {
		rt.Abortf(exitcode.ErrIllegalArgument, "proof type %d not allowed for new miner actors", params.WindowPoStProofType)
	}

	owner := resolveControlAddress(rt, params.OwnerAddr)
	worker := resolveWorkerAddress(rt, params.WorkerAddr)
	controlAddrs := make([]addr.Address, 0, len(params.ControlAddrs))
	for _, ca := range params.ControlAddrs {
		resolved := resolveControlAddress(rt, ca)
		controlAddrs = append(controlAddrs, resolved)
	}

	currEpoch := rt.CurrEpoch()
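	// Derive a per-miner proving period offset from the miner's address and the current
	// epoch, so that Window PoSt deadlines are staggered across miners rather than all
	// falling at the same epochs.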
	offset, err := assignProvingPeriodOffset(rt.Receiver(), currEpoch, rt.HashBlake2b)
	builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to assign proving period offset")
	periodStart := currentProvingPeriodStart(currEpoch, offset)
	builtin.RequireState(rt, periodStart <= currEpoch, "computed proving period start %d after current epoch %d", periodStart, currEpoch)
	deadlineIndex := currentDeadlineIndex(currEpoch, periodStart)
	builtin.RequireState(rt, deadlineIndex < WPoStPeriodDeadlines, "computed proving deadline index %d invalid", deadlineIndex)

	info, err := ConstructMinerInfo(owner, worker, controlAddrs, params.PeerId, params.Multiaddrs, params.WindowPoStProofType)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct initial miner info")
	infoCid := rt.StorePut(info)

	store := adt.AsStore(rt)
	state, err := ConstructState(store, infoCid, periodStart, deadlineIndex)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
	rt.StateCreate(state)

	return nil
}

/////////////
// Control //
/////////////

// type GetControlAddressesReturn struct {
// 	Owner        addr.Address
// 	Worker       addr.Address
// 	ControlAddrs []addr.Address
// }
type GetControlAddressesReturn = miner2.GetControlAddressesReturn

func (a Actor) ControlAddresses(rt Runtime, _ *abi.EmptyValue) *GetControlAddressesReturn {
	rt.ValidateImmediateCallerAcceptAny()
	var st State
	rt.StateReadonly(&st)
	info := getMinerInfo(rt, &st)
	return &GetControlAddressesReturn{
		Owner:        info.Owner,
		Worker:       info.Worker,
		ControlAddrs: info.ControlAddresses,
	}
}

//type ChangeWorkerAddressParams struct {
//	NewWorker       addr.Address
//	NewControlAddrs []addr.Address
//}
type ChangeWorkerAddressParams = miner0.ChangeWorkerAddressParams

// ChangeWorkerAddress will ALWAYS overwrite the existing control addresses with the control addresses passed in the params.
// If a nil addresses slice is passed, the control addresses will be cleared.
// A worker change will be scheduled if the worker passed in the params is different from the existing worker.
func (a Actor) ChangeWorkerAddress(rt Runtime, params *ChangeWorkerAddressParams) *abi.EmptyValue {
	checkControlAddresses(rt, params.NewControlAddrs)

	newWorker := resolveWorkerAddress(rt, params.NewWorker)

	var controlAddrs []addr.Address
	for _, ca := range params.NewControlAddrs {
		resolved := resolveControlAddress(rt, ca)
		controlAddrs = append(controlAddrs, resolved)
	}

	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		// Only the Owner is allowed to change the newWorker and control addresses.
		rt.ValidateImmediateCallerIs(info.Owner)

		// save the new control addresses
		info.ControlAddresses = controlAddrs

		// save newWorker addr key change request
		if newWorker != info.Worker && info.PendingWorkerKey == nil {
			info.PendingWorkerKey = &WorkerKeyChange{
				NewWorker:   newWorker,
				EffectiveAt: rt.CurrEpoch() + WorkerKeyChangeDelay,
			}
		}

		err := st.SaveInfo(adt.AsStore(rt), info)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
	})

	return nil
}

// Triggers a worker address change if a change has been requested and its effective epoch has arrived.
func (a Actor) ConfirmUpdateWorkerKey(rt Runtime, params *abi.EmptyValue) *abi.EmptyValue {
	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		// Only the Owner is allowed to change the newWorker.
		rt.ValidateImmediateCallerIs(info.Owner)

		processPendingWorker(info, rt, &st)
	})

	return nil
}

// Proposes or confirms a change of owner address.
// If invoked by the current owner, proposes a new owner address for confirmation. If the proposed address is the
// current owner address, revokes any existing proposal.
// If invoked by the previously proposed address, with the same proposal, changes the current owner address to be
// that proposed address.
func (a Actor) ChangeOwnerAddress(rt Runtime, newAddress *addr.Address) *abi.EmptyValue {
	if newAddress.Empty() {
		rt.Abortf(exitcode.ErrIllegalArgument, "empty address")
	}
	if newAddress.Protocol() != addr.ID {
		rt.Abortf(exitcode.ErrIllegalArgument, "owner address must be an ID address")
	}
	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)
		if rt.Caller() == info.Owner || info.PendingOwnerAddress == nil {
			// Propose new address.
			rt.ValidateImmediateCallerIs(info.Owner)
			info.PendingOwnerAddress = newAddress
		} else { // info.PendingOwnerAddress != nil
			// Confirm the proposal.
			// This validates that the operator can in fact use the proposed new address to sign messages.
			rt.ValidateImmediateCallerIs(*info.PendingOwnerAddress)
			if *newAddress != *info.PendingOwnerAddress {
				rt.Abortf(exitcode.ErrIllegalArgument, "expected confirmation of %v, got %v",
					info.PendingOwnerAddress, newAddress)
			}
			info.Owner = *info.PendingOwnerAddress
		}

		// Clear any resulting no-op change.
		if info.PendingOwnerAddress != nil && *info.PendingOwnerAddress == info.Owner {
			info.PendingOwnerAddress = nil
		}

		err := st.SaveInfo(adt.AsStore(rt), info)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save miner info")
	})
	return nil
}

//type ChangePeerIDParams struct {
//	NewID abi.PeerID
//}
type ChangePeerIDParams = miner0.ChangePeerIDParams

func (a Actor) ChangePeerID(rt Runtime, params *ChangePeerIDParams) *abi.EmptyValue {
	checkPeerInfo(rt, params.NewID, nil)

	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		info.PeerId = params.NewID
		err := st.SaveInfo(adt.AsStore(rt), info)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
	})
	return nil
}

//type ChangeMultiaddrsParams struct {
//	NewMultiaddrs []abi.Multiaddrs
//}
type ChangeMultiaddrsParams = miner0.ChangeMultiaddrsParams

func (a Actor) ChangeMultiaddrs(rt Runtime, params *ChangeMultiaddrsParams) *abi.EmptyValue {
	checkPeerInfo(rt, nil, params.NewMultiaddrs)

	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		info.Multiaddrs = params.NewMultiaddrs
		err := st.SaveInfo(adt.AsStore(rt), info)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
	})
	return nil
}

//////////////////
// WindowedPoSt //
//////////////////

//type PoStPartition struct {
//	// Partitions are numbered per-deadline, from zero.
//	Index uint64
//	// Sectors skipped while proving that weren't already declared faulty
//	Skipped bitfield.BitField
//}
type PoStPartition = miner0.PoStPartition

// Information submitted by a miner to provide a Window PoSt.
//type SubmitWindowedPoStParams struct {
//	// The deadline index which the submission targets.
//	Deadline uint64
//	// The partitions being proven.
//	Partitions []PoStPartition
//	// Array of proofs, one per distinct registered proof type present in the sectors being proven.
//	// In the usual case of a single proof type, this array will always have a single element (independent of number of partitions).
//	Proofs []proof.PoStProof
//	// The epoch at which these proofs is being committed to a particular chain.
//	// NOTE: This field should be removed in the future. See
//	// https://github.com/filecoin-project/specs-actors/issues/1094
//	ChainCommitEpoch abi.ChainEpoch
//	// The ticket randomness on the chain at the chain commit epoch.
//	ChainCommitRand abi.Randomness
//}
type SubmitWindowedPoStParams = miner0.SubmitWindowedPoStParams

// Invoked by miner's worker address to submit their fallback post
func (a Actor) SubmitWindowedPoSt(rt Runtime, params *SubmitWindowedPoStParams) *abi.EmptyValue {
	currEpoch := rt.CurrEpoch()
	store := adt.AsStore(rt)
	var st State

	// Verify that the miner has passed exactly 1 proof.
	if len(params.Proofs) != 1 {
		rt.Abortf(exitcode.ErrIllegalArgument, "expected exactly one proof, got %d", len(params.Proofs))
	}

	if !CanWindowPoStProof(params.Proofs[0].PoStProof) {
		rt.Abortf(exitcode.ErrIllegalArgument, "proof type %d not allowed", params.Proofs[0].PoStProof)
	}

	if params.Deadline >= WPoStPeriodDeadlines {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d of %d", params.Deadline, WPoStPeriodDeadlines)
	}
	// Technically, ChainCommitRand should be _exactly_ 32 bytes. However:
	// 1. It's convenient to allow smaller slices when testing.
	// 2. Nothing bad will happen if the caller provides too little randomness.
	if len(params.ChainCommitRand) > abi.RandomnessLength {
		rt.Abortf(exitcode.ErrIllegalArgument, "expected at most %d bytes of randomness, got %d", abi.RandomnessLength, len(params.ChainCommitRand))
	}

	var postResult *PoStResult
	var info *MinerInfo
	rt.StateTransaction(&st, func() {
		info = getMinerInfo(rt, &st)
		maxProofSize, err := info.WindowPoStProofType.ProofSize()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to determine max window post proof size")

		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		// Make sure the miner is using the correct proof type.
		if params.Proofs[0].PoStProof != info.WindowPoStProofType {
			rt.Abortf(exitcode.ErrIllegalArgument, "expected proof of type %d, got proof of type %d", info.WindowPoStProofType, params.Proofs[0])
		}

		// Make sure the proof size doesn't exceed the max. We could probably check for an exact match, but this is safer.
		if maxSize := maxProofSize * uint64(len(params.Partitions)); uint64(len(params.Proofs[0].ProofBytes)) > maxSize {
			rt.Abortf(exitcode.ErrIllegalArgument, "expected proof to be smaller than %d bytes", maxSize)
		}

		// Validate that the miner didn't try to prove too many partitions at once.
		submissionPartitionLimit := loadPartitionsSectorsMax(info.WindowPoStPartitionSectors)
		if uint64(len(params.Partitions)) > submissionPartitionLimit {
			rt.Abortf(exitcode.ErrIllegalArgument, "too many partitions %d, limit %d", len(params.Partitions), submissionPartitionLimit)
		}

		currDeadline := st.DeadlineInfo(currEpoch)
		// Check that the miner state indicates that the current proving deadline has started.
		// This should only fail if the cron actor wasn't invoked, and matters only in case that it hasn't been
		// invoked for a whole proving period, and hence the missed PoSt submissions from the prior occurrence
		// of this deadline haven't been processed yet.
		if !currDeadline.IsOpen() {
			rt.Abortf(exitcode.ErrIllegalState, "proving period %d not yet open at %d", currDeadline.PeriodStart, currEpoch)
		}

		// The miner may only submit a proof for the current deadline.
		if params.Deadline != currDeadline.Index {
			rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d at epoch %d, expected %d",
				params.Deadline, currEpoch, currDeadline.Index)
		}

		// Verify that the PoSt was committed to the chain at most WPoStChallengeLookback+WPoStChallengeWindow in the past.
		if params.ChainCommitEpoch < currDeadline.Challenge {
			rt.Abortf(exitcode.ErrIllegalArgument, "expected chain commit epoch %d to be after %d", params.ChainCommitEpoch, currDeadline.Challenge)
		}
		if params.ChainCommitEpoch >= currEpoch {
			rt.Abortf(exitcode.ErrIllegalArgument, "chain commit epoch %d must be less than the current epoch %d", params.ChainCommitEpoch, currEpoch)
		}
		// Verify the chain commit randomness.
		commRand := rt.GetRandomnessFromTickets(crypto.DomainSeparationTag_PoStChainCommit, params.ChainCommitEpoch, nil)
		if !bytes.Equal(commRand, params.ChainCommitRand) {
			rt.Abortf(exitcode.ErrIllegalArgument, "post commit randomness mismatched")
		}

		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors")

		deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		deadline, err := deadlines.LoadDeadline(store, params.Deadline)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", params.Deadline)

		// Record proven sectors/partitions, returning updates to power and the final set of sectors
		// proven/skipped.
		//
		// NOTE: This function does not actually check the proofs but does assume that they're correct. Instead,
		// it snapshots the deadline's state and the submitted proofs at the end of the challenge window and
		// allows third-parties to dispute these proofs.
		//
		// While we could perform _all_ operations at the end of challenge window, we do as we can here to avoid
		// overloading cron.
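		// Sectors skipped in this submission are recorded as faulty; FaultMaxAge bounds how long
		// they may remain faulty, counted from the end of this deadline, before being terminated.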
		faultExpiration := currDeadline.Last() + FaultMaxAge
		postResult, err = deadline.RecordProvenSectors(store, sectors, info.SectorSize, QuantSpecForDeadline(currDeadline), faultExpiration, params.Partitions)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to process post submission for deadline %d", params.Deadline)

		// Make sure we actually proved something.

		provenSectors, err := bitfield.SubtractBitField(postResult.Sectors, postResult.IgnoredSectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to determine proven sectors for deadline %d", params.Deadline)

		noSectors, err := provenSectors.IsEmpty()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to determine if any sectors were proven", params.Deadline)
		if noSectors {
			// Abort verification if all sectors are (now) faults. There's nothing to prove.
			// It's not rational for a miner to submit a Window PoSt marking *all* non-faulty sectors as skipped,
			// since that will just cause them to pay a penalty at deadline end that would otherwise be zero
			// if they had *not* declared them.
			rt.Abortf(exitcode.ErrIllegalArgument, "cannot prove partitions with no active sectors")
		}

		// If we're not recovering power, record the proof for optimistic verification.
		if postResult.RecoveredPower.IsZero() {
			err = deadline.RecordPoStProofs(store, postResult.Partitions, params.Proofs)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to record proof for optimistic verification", params.Deadline)
		} else {
			// otherwise, check the proof
			sectorInfos, err := sectors.LoadForProof(postResult.Sectors, postResult.IgnoredSectors)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors for post verification")

			err = verifyWindowedPost(rt, currDeadline.Challenge, sectorInfos, params.Proofs)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "window post failed")
		}

		err = deadlines.UpdateDeadline(store, params.Deadline, deadline)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", params.Deadline)

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})

	// Restore power for recovered sectors. Remove power for new faults.
	// NOTE: It would be permissible to delay the power loss until the deadline closes, but that would require
	// additional accounting state.
	// https://github.com/filecoin-project/specs-actors/issues/414
	requestUpdatePower(rt, postResult.PowerDelta)

	rt.StateReadonly(&st)
	err := st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return nil
}

// type DisputeWindowedPoStParams struct {
// 		Deadline  uint64
// 		PoStIndex uint64 // only one is allowed at a time to avoid loading too many sector infos.
// }
type DisputeWindowedPoStParams = miner3.DisputeWindowedPoStParams

func (a Actor) DisputeWindowedPoSt(rt Runtime, params *DisputeWindowedPoStParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
	reporter := rt.Caller()

	if params.Deadline >= WPoStPeriodDeadlines {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %d of %d", params.Deadline, WPoStPeriodDeadlines)
	}

	currEpoch := rt.CurrEpoch()

	// Note: these are going to be slightly inaccurate as time
	// will have moved on from when the post was actually
	// submitted.
	//
	// However, these are estimates _anyways_.
	epochReward := requestCurrentEpochBlockReward(rt)
	pwrTotal := requestCurrentTotalPower(rt)

	toBurn := abi.NewTokenAmount(0)
	toReward := abi.NewTokenAmount(0)
	pledgeDelta := abi.NewTokenAmount(0)
	powerDelta := NewPowerPairZero()
	var st State
	rt.StateTransaction(&st, func() {
		dlInfo := st.DeadlineInfo(currEpoch)
		if !deadlineAvailableForOptimisticPoStDispute(dlInfo.PeriodStart, params.Deadline, currEpoch) {
			rt.Abortf(exitcode.ErrForbidden, "can only dispute window posts during the dispute window (%d epochs after the challenge window closes)", WPoStDisputeWindow)
		}

		info := getMinerInfo(rt, &st)
		penalisedPower := NewPowerPairZero()
		store := adt.AsStore(rt)

		// Check proof
		{
			// Find the proving period start for the deadline in question.
			ppStart := dlInfo.PeriodStart
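			// If the disputed deadline has not yet occurred in the current proving period,
			// the instance being disputed belongs to the previous period.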
			if dlInfo.Index < params.Deadline {
				ppStart -= WPoStProvingPeriod
			}
			targetDeadline := NewDeadlineInfo(ppStart, params.Deadline, currEpoch)

			// Load the target deadline.
			deadlinesCurrent, err := st.LoadDeadlines(store)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

			dlCurrent, err := deadlinesCurrent.LoadDeadline(store, params.Deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline")

			// Take the post from the snapshot for dispute.
			// This operation REMOVES the PoSt from the snapshot so
			// it can't be disputed again. If this method fails,
			// this operation must be rolled back.
			partitions, proofs, err := dlCurrent.TakePoStProofs(store, params.PoStIndex)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load proof for dispute")

			// Load the partition info we need for the dispute.
			disputeInfo, err := dlCurrent.LoadPartitionsForDispute(store, partitions)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load partition info for dispute")
			// This includes power that is no longer active (e.g., due to sector terminations).
			// It must only be used for penalty calculations, not power adjustments.
			penalisedPower = disputeInfo.DisputedPower

			// Load sectors for the dispute.
			sectors, err := LoadSectors(store, dlCurrent.SectorsSnapshot)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors snapshot array")

			sectorInfos, err := sectors.LoadForProof(disputeInfo.AllSectorNos, disputeInfo.IgnoredSectorNos)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors to dispute window post")

			// Check proof, we fail if validation succeeds.
			err = verifyWindowedPost(rt, targetDeadline.Challenge, sectorInfos, proofs)
			if err == nil {
				rt.Abortf(exitcode.ErrIllegalArgument, "failed to dispute valid post")
				return
			}
			rt.Log(rtt.INFO, "successfully disputed: %s", err)

			// Ok, now we record faults. This always works because
			// we don't allow compaction/moving sectors during the
			// challenge window.
			//
			// However, some of these sectors may have been
			// terminated. That's fine, we'll skip them.
			faultExpirationEpoch := targetDeadline.Last() + FaultMaxAge
			powerDelta, err = dlCurrent.RecordFaults(store, sectors, info.SectorSize, QuantSpecForDeadline(targetDeadline), faultExpirationEpoch, disputeInfo.DisputedSectors)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to declare faults")

			err = deadlinesCurrent.UpdateDeadline(store, params.Deadline, dlCurrent)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", params.Deadline)
			err = st.SaveDeadlines(store, deadlinesCurrent)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
		}

		// Penalties.
		{
			// Calculate the base penalty.
			penaltyBase := PledgePenaltyForInvalidWindowPoSt(
				epochReward.ThisEpochRewardSmoothed,
				pwrTotal.QualityAdjPowerSmoothed,
				penalisedPower.QA,
			)

			// Calculate the target reward.
			rewardTarget := RewardForDisputedWindowPoSt(info.WindowPoStProofType, penalisedPower)

			// Compute the target penalty by adding the
			// base penalty to the target reward. We don't
			// take reward out of the penalty as the miner
			// could end up receiving a substantial
			// portion of their fee back as a reward.
			penaltyTarget := big.Add(penaltyBase, rewardTarget)

			err := st.ApplyPenalty(penaltyTarget)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")
			penaltyFromVesting, penaltyFromBalance, err := st.RepayPartialDebtInPriorityOrder(store, currEpoch, rt.CurrentBalance())
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pay debt")
			toBurn = big.Add(penaltyFromVesting, penaltyFromBalance)

			// Now, move as much of the target reward as
			// we can from the burn to the reward.
			toReward = big.Min(toBurn, rewardTarget)
			toBurn = big.Sub(toBurn, toReward)

			pledgeDelta = penaltyFromVesting.Neg()
		}
	})

	requestUpdatePower(rt, powerDelta)

	if !toReward.IsZero() {
		// Try to send the reward to the reporter.
		code := rt.Send(reporter, builtin.MethodSend, nil, toReward, &builtin.Discard{})

		// If we fail, log and burn the reward to make sure the balances remain correct.
		if !code.IsSuccess() {
			rt.Log(rtt.ERROR, "failed to send reward")
			toBurn = big.Add(toBurn, toReward)
		}
	}
	burnFunds(rt, toBurn, BurnMethodDisputeWindowedPoSt)
	notifyPledgeChanged(rt, pledgeDelta)
	rt.StateReadonly(&st)

	err := st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")
	return nil
}

///////////////////////
// Sector Commitment //
///////////////////////

//type SectorPreCommitInfo struct {
//	SealProof       abi.RegisteredSealProof
//	SectorNumber    abi.SectorNumber
//	SealedCID       cid.Cid `checked:"true"` // CommR
//	SealRandEpoch   abi.ChainEpoch
//	DealIDs         []abi.DealID
//	Expiration      abi.ChainEpoch
//	ReplaceCapacity bool                    // Must be false since v7
//	ReplaceSectorDeadline  uint64           // Unused since v7
//	ReplaceSectorPartition uint64           // Unused since v7
//	ReplaceSectorNumber    abi.SectorNumber // Unused since v7
//}
type PreCommitSectorParams = miner0.SectorPreCommitInfo

// Pledges to seal and commit a single sector.
// See PreCommitSectorBatch for details.
// This method may be deprecated and removed in the future.
func (a Actor) PreCommitSector(rt Runtime, params *PreCommitSectorParams) *abi.EmptyValue {
	// This is a direct method call to self, not a message send.
	batchParams := &PreCommitSectorBatchParams{Sectors: []miner0.SectorPreCommitInfo{*params}}
	a.PreCommitSectorBatch(rt, batchParams)
	return nil
}

//type PreCommitSectorBatchParams struct {
//	Sectors []miner0.SectorPreCommitInfo
//}
type PreCommitSectorBatchParams = miner5.PreCommitSectorBatchParams

// Pledges the miner to seal and commit some new sectors.
// The caller specifies sector numbers, sealed sector data CIDs, seal randomness epoch, expiration, and the IDs
// of any storage deals contained in the sector data. The storage deal proposals must be already submitted
// to the storage market actor.
// This method calculates the sector's power, locks a pre-commit deposit for the sector, stores information about the
// sector in state and waits for it to be proven or expire.
func (a Actor) PreCommitSectorBatch(rt Runtime, params *PreCommitSectorBatchParams) *abi.EmptyValue {
	currEpoch := rt.CurrEpoch()
	if len(params.Sectors) == 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "batch empty")
	} else if len(params.Sectors) > PreCommitSectorBatchMaxSize {
		rt.Abortf(exitcode.ErrIllegalArgument, "batch of %d too large, max %d", len(params.Sectors), PreCommitSectorBatchMaxSize)
	}

	// Check per-sector preconditions before opening state transaction or sending other messages.
	challengeEarliest := currEpoch - MaxPreCommitRandomnessLookback
	sectorsDeals := make([]market.SectorDeals, len(params.Sectors))
	sectorNumbers := bitfield.New()
	for i, precommit := range params.Sectors {
		// Bitfield.IsSet() is fast when there are only locally-set values.
		set, err := sectorNumbers.IsSet(uint64(precommit.SectorNumber))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "error checking sector number")
		if set {
			rt.Abortf(exitcode.ErrIllegalArgument, "duplicate sector number %d", precommit.SectorNumber)
		}
		sectorNumbers.Set(uint64(precommit.SectorNumber))

		if !CanPreCommitSealProof(precommit.SealProof) {
			rt.Abortf(exitcode.ErrIllegalArgument, "unsupported seal proof type %v", precommit.SealProof)
		}
		if precommit.SectorNumber > abi.MaxSectorNumber {
			rt.Abortf(exitcode.ErrIllegalArgument, "sector number %d out of range 0..(2^63-1)", precommit.SectorNumber)
		}
		if !precommit.SealedCID.Defined() {
			rt.Abortf(exitcode.ErrIllegalArgument, "sealed CID undefined")
		}
		if precommit.SealedCID.Prefix() != SealedCIDPrefix {
			rt.Abortf(exitcode.ErrIllegalArgument, "sealed CID had wrong prefix")
		}
		if precommit.SealRandEpoch >= currEpoch {
			rt.Abortf(exitcode.ErrIllegalArgument, "seal challenge epoch %v must be before now %v", precommit.SealRandEpoch, rt.CurrEpoch())
		}
		if precommit.SealRandEpoch < challengeEarliest {
			rt.Abortf(exitcode.ErrIllegalArgument, "seal challenge epoch %v too old, must be after %v", precommit.SealRandEpoch, challengeEarliest)
		}

		// Require sector lifetime meets minimum by assuming activation happens at last epoch permitted for seal proof.
		// This could make sector maximum lifetime validation more lenient if the maximum sector limit isn't hit first.
		maxActivation := currEpoch + MaxProveCommitDuration[precommit.SealProof]
		validateExpiration(rt, maxActivation, precommit.Expiration, precommit.SealProof)

		if precommit.ReplaceCapacity {
			rt.Abortf(exitcode.SysErrForbidden, "cc upgrade through precommit discontinued, use lightweight cc upgrade instead")
		}

		sectorsDeals[i] = market.SectorDeals{
			SectorExpiry: precommit.Expiration,
			DealIDs:      precommit.DealIDs,
		}
	}

	// gather information from other actors
	rewardStats := requestCurrentEpochBlockReward(rt)
	pwrTotal := requestCurrentTotalPower(rt)
	dealWeights := requestDealWeights(rt, sectorsDeals)

	if len(dealWeights.Sectors) != len(params.Sectors) {
		rt.Abortf(exitcode.ErrIllegalState, "deal weight request returned %d records, expected %d",
			len(dealWeights.Sectors), len(params.Sectors))
	}

	store := adt.AsStore(rt)
	var st State
	var err error
	feeToBurn := abi.NewTokenAmount(0)
	var needsCron bool
	rt.StateTransaction(&st, func() {
		// Aggregate fee applies only when batching.
		if len(params.Sectors) > 1 {
			aggregateFee := AggregatePreCommitNetworkFee(len(params.Sectors), rt.BaseFee())
			// AggregateFee applied to fee debt to consolidate burn with outstanding debts
			err := st.ApplyPenalty(aggregateFee)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")
		}

		// available balance already accounts for fee debt so it is correct to call
		// this before RepayDebts. We would have to
		// subtract fee debt explicitly if we called this after.
		availableBalance, err := st.GetAvailableBalance(rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate available balance")
		feeToBurn = RepayDebtsOrAbort(rt, &st)

		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		if ConsensusFaultActive(info, currEpoch) {
			rt.Abortf(exitcode.ErrForbidden, "pre-commit not allowed during active consensus fault")
		}

		chainInfos := make([]*SectorPreCommitOnChainInfo, len(params.Sectors))
		totalDepositRequired := big.Zero()
		cleanUpEvents := map[abi.ChainEpoch][]uint64{}
		dealCountMax := SectorDealsMax(info.SectorSize)
		for i, precommit := range params.Sectors {
			// Sector must have the same Window PoSt proof type as the miner's recorded seal type.
			sectorWPoStProof, err := precommit.SealProof.RegisteredWindowPoStProof()
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to lookup Window PoSt proof type for sector seal proof %d", precommit.SealProof)
			if sectorWPoStProof != info.WindowPoStProofType {
				rt.Abortf(exitcode.ErrIllegalArgument, "sector Window PoSt proof type %d must match miner Window PoSt proof type %d (seal proof type %d)",
					sectorWPoStProof, info.WindowPoStProofType, precommit.SealProof)
			}

			if uint64(len(precommit.DealIDs)) > dealCountMax {
				rt.Abortf(exitcode.ErrIllegalArgument, "too many deals for sector %d > %d", len(precommit.DealIDs), dealCountMax)
			}

			// Ensure total deal space does not exceed sector size.
			dealWeight := dealWeights.Sectors[i]
			if dealWeight.DealSpace > uint64(info.SectorSize) {
				rt.Abortf(exitcode.ErrIllegalArgument, "deals too large to fit in sector %d > %d", dealWeight.DealSpace, info.SectorSize)
			}

			// Estimate the sector weight using the current epoch as an estimate for activation,
			// and compute the pre-commit deposit using that weight.
			// The sector's power will be recalculated when it's proven.
			duration := precommit.Expiration - currEpoch
			sectorWeight := QAPowerForWeight(info.SectorSize, duration, dealWeight.DealWeight, dealWeight.VerifiedDealWeight)
			depositReq := PreCommitDepositForPower(rewardStats.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed, sectorWeight)

			// Build on-chain record.
			chainInfos[i] = &SectorPreCommitOnChainInfo{
				Info:               SectorPreCommitInfo(precommit),
				PreCommitDeposit:   depositReq,
				PreCommitEpoch:     currEpoch,
				DealWeight:         dealWeight.DealWeight,
				VerifiedDealWeight: dealWeight.VerifiedDealWeight,
			}
			totalDepositRequired = big.Add(totalDepositRequired, depositReq)

			// Calculate pre-commit cleanup
			msd, ok := MaxProveCommitDuration[precommit.SealProof]
			if !ok {
				rt.Abortf(exitcode.ErrIllegalArgument, "no max seal duration set for proof type: %d", precommit.SealProof)
			}
			// PreCommitCleanUpDelay > 0 here is critical for the batch verification of proofs. Without it, if a proof arrived exactly on the
			// due epoch, ProveCommitSector would accept it, then the expiry event would remove it, and then
			// ConfirmSectorProofsValid would fail to find it.
			cleanUpBound := currEpoch + msd + ExpiredPreCommitCleanUpDelay
			cleanUpEvents[cleanUpBound] = append(cleanUpEvents[cleanUpBound], uint64(precommit.SectorNumber))
		}

		// Batch update actor state.
		if availableBalance.LessThan(totalDepositRequired) {
			rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds %v for pre-commit deposit: %v", availableBalance, totalDepositRequired)
		}
		err = st.AddPreCommitDeposit(totalDepositRequired)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add pre-commit deposit %v", totalDepositRequired)

		err = st.AllocateSectorNumbers(store, sectorNumbers, DenyCollisions)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to allocate sector ids %v", sectorNumbers)

		err = st.PutPrecommittedSectors(store, chainInfos...)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to write pre-committed sectors")

		err = st.AddPreCommitCleanUps(store, cleanUpEvents)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add pre-commit expiry to queue")

		// Activate miner cron
		needsCron = !st.DeadlineCronActive
		st.DeadlineCronActive = true
	})

	burnFunds(rt, feeToBurn, BurnMethodPreCommitSectorBatch)
	rt.StateReadonly(&st)
	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")
	if needsCron {
		newDlInfo := st.DeadlineInfo(currEpoch)
		enrollCronEvent(rt, newDlInfo.Last(), &CronEventPayload{
			EventType: CronEventProvingDeadline,
		})
	}

	return nil
}

//type ProveCommitAggregateParams struct {
//	SectorNumbers  bitfield.BitField
//	AggregateProof []byte
//}
type ProveCommitAggregateParams = miner5.ProveCommitAggregateParams

// Checks state of the corresponding sector pre-commitments and verifies aggregate proof of replication
// of these sectors. If valid, the sectors' deals are activated, sectors are assigned a deadline and charged pledge
// and precommit state is removed.
func (a Actor) ProveCommitAggregate(rt Runtime, params *ProveCommitAggregateParams) *abi.EmptyValue {
	aggSectorsCount, err := params.SectorNumbers.Count()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count aggregated sectors")
	if aggSectorsCount > MaxAggregatedSectors {
		rt.Abortf(exitcode.ErrIllegalArgument, "too many sectors addressed, addressed %d want <= %d", aggSectorsCount, MaxAggregatedSectors)
	} else if aggSectorsCount < MinAggregatedSectors {
		rt.Abortf(exitcode.ErrIllegalArgument, "too few sectors addressed, addressed %d want >= %d", aggSectorsCount, MinAggregatedSectors)
	}

	if uint64(len(params.AggregateProof)) > MaxAggregateProofSize {
		rt.Abortf(exitcode.ErrIllegalArgument, "sector prove-commit proof of size %d exceeds max size of %d",
			len(params.AggregateProof), MaxAggregateProofSize)
	}

	store := adt.AsStore(rt)
	var st State
	rt.StateReadonly(&st)

	info := getMinerInfo(rt, &st)
	rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

	precommits, err := st.GetAllPrecommittedSectors(store, params.SectorNumbers)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get precommits")

	// compute data commitments and validate each precommit
	computeDataCommitmentsInputs := make([]*market.SectorDataSpec, len(precommits))
	precommitsToConfirm := []*SectorPreCommitOnChainInfo{}
	for i, precommit := range precommits {
		msd, ok := MaxProveCommitDuration[precommit.Info.SealProof]
		if !ok {
			rt.Abortf(exitcode.ErrIllegalState, "no max seal duration for proof type: %d", precommit.Info.SealProof)
		}
		proveCommitDue := precommit.PreCommitEpoch + msd
		if rt.CurrEpoch() > proveCommitDue {
			rt.Log(rtt.WARN, "skipping commitment for sector %d, too late at %d, due %d", precommit.Info.SectorNumber, rt.CurrEpoch(), proveCommitDue)
		} else {
			precommitsToConfirm = append(precommitsToConfirm, precommit)
		}
		// All sealProof types should match
		if i >= 1 {
			prevSealProof := precommits[i-1].Info.SealProof
			builtin.RequireState(rt, prevSealProof == precommit.Info.SealProof, "aggregate contains mismatched seal proofs %d and %d", prevSealProof, precommit.Info.SealProof)
		}

		computeDataCommitmentsInputs[i] = &market.SectorDataSpec{
			SectorType: precommit.Info.SealProof,
			DealIDs:    precommit.Info.DealIDs,
		}
	}

	// compute shared verification inputs
	commDs := requestUnsealedSectorCIDs(rt, computeDataCommitmentsInputs...)
	svis := make([]proof.AggregateSealVerifyInfo, 0)
	receiver := rt.Receiver()
	minerActorID, err := addr.IDFromAddress(receiver)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "runtime provided non-ID receiver address %s", receiver)
	buf := new(bytes.Buffer)
	err = receiver.MarshalCBOR(buf)
	receiverBytes := buf.Bytes()
	builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to marshal address for seal verification challenge")

	for i, precommit := range precommits {
		interactiveEpoch := precommit.PreCommitEpoch + PreCommitChallengeDelay
		if rt.CurrEpoch() <= interactiveEpoch {
			rt.Abortf(exitcode.ErrForbidden, "too early to prove sector %d", precommit.Info.SectorNumber)
		}

		svInfoRandomness := rt.GetRandomnessFromTickets(crypto.DomainSeparationTag_SealRandomness, precommit.Info.SealRandEpoch, receiverBytes)
		svInfoInteractiveRandomness := rt.GetRandomnessFromBeacon(crypto.DomainSeparationTag_InteractiveSealChallengeSeed, interactiveEpoch, receiverBytes)
		svi := proof.AggregateSealVerifyInfo{
			Number:                precommit.Info.SectorNumber,
			InteractiveRandomness: abi.InteractiveSealRandomness(svInfoInteractiveRandomness),
			Randomness:            abi.SealRandomness(svInfoRandomness),
			SealedCID:             precommit.Info.SealedCID,
			UnsealedCID:           commDs[i],
		}
		svis = append(svis, svi)
	}

	builtin.RequireState(rt, len(precommits) > 0, "bitfield non-empty but zero precommits read from state")
	sealProof := precommits[0].Info.SealProof
	err = rt.VerifyAggregateSeals(
		proof.AggregateSealVerifyProofAndInfos{
			Infos:          svis,
			Proof:          params.AggregateProof,
			Miner:          abi.ActorID(minerActorID),
			SealProof:      sealProof,
			AggregateProof: abi.RegisteredAggregationProof_SnarkPackV1,
		})
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "aggregate seal verify failed")

	rew := requestCurrentEpochBlockReward(rt)
	pwr := requestCurrentTotalPower(rt)

	confirmSectorProofsValid(rt, precommitsToConfirm, rew.ThisEpochBaselinePower, rew.ThisEpochRewardSmoothed, pwr.QualityAdjPowerSmoothed)

	// Compute and burn the aggregate network fee. We need to re-load the state as
	// confirmSectorProofsValid can change it.
	rt.StateReadonly(&st)
	aggregateFee := AggregateProveCommitNetworkFee(len(precommitsToConfirm), rt.BaseFee())
	unlockedBalance, err := st.GetUnlockedBalance(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to determine unlocked balance")
	if unlockedBalance.LessThan(aggregateFee) {
		rt.Abortf(exitcode.ErrInsufficientFunds,
			"remaining unlocked funds after prove-commit (%s) are insufficient to pay aggregation fee of %s",
			unlockedBalance, aggregateFee,
		)
	}
	burnFunds(rt, aggregateFee, BurnMethodProveCommitAggregate)

	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return nil
}

//type ProveCommitSectorParams struct {
//	SectorNumber abi.SectorNumber
//	ReplicaProof        []byte
//}
type ProveCommitSectorParams = miner0.ProveCommitSectorParams

// Checks state of the corresponding sector pre-commitment, then schedules the proof to be verified in bulk
// by the power actor.
// If valid, the power actor will call ConfirmSectorProofsValid at the end of the same epoch as this message.
func (a Actor) ProveCommitSector(rt Runtime, params *ProveCommitSectorParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerAcceptAny()

	if params.SectorNumber > abi.MaxSectorNumber {
		rt.Abortf(exitcode.ErrIllegalArgument, "sector number greater than maximum")
	}

	store := adt.AsStore(rt)
	sectorNo := params.SectorNumber

	var st State
	rt.StateReadonly(&st)

	precommit, found, err := st.GetPrecommittedSector(store, sectorNo)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pre-committed sector %v", sectorNo)
	if !found {
		rt.Abortf(exitcode.ErrNotFound, "no pre-committed sector %v", sectorNo)
	}

	maxProofSize, err := precommit.Info.SealProof.ProofSize()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to determine max proof size for sector %v", sectorNo)
	if uint64(len(params.Proof)) > maxProofSize {
		rt.Abortf(exitcode.ErrIllegalArgument, "sector prove-commit proof of size %d exceeds max size of %d",
			len(params.Proof), maxProofSize)
	}

	msd, ok := MaxProveCommitDuration[precommit.Info.SealProof]
	if !ok {
		rt.Abortf(exitcode.ErrIllegalState, "no max seal duration for proof type: %d", precommit.Info.SealProof)
	}
	proveCommitDue := precommit.PreCommitEpoch + msd
	if rt.CurrEpoch() > proveCommitDue {
		rt.Abortf(exitcode.ErrIllegalArgument, "commitment proof for %d too late at %d, due %d", sectorNo, rt.CurrEpoch(), proveCommitDue)
	}

	svi := getVerifyInfo(rt, &SealVerifyStuff{
		SealedCID:           precommit.Info.SealedCID,
		InteractiveEpoch:    precommit.PreCommitEpoch + PreCommitChallengeDelay,
		SealRandEpoch:       precommit.Info.SealRandEpoch,
		Proof:               params.Proof,
		DealIDs:             precommit.Info.DealIDs,
		SectorNumber:        precommit.Info.SectorNumber,
		RegisteredSealProof: precommit.Info.SealProof,
	})

	code := rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.MethodsPower.SubmitPoRepForBulkVerify,
		svi,
		abi.NewTokenAmount(0),
		&builtin.Discard{},
	)
	builtin.RequireSuccess(rt, code, "failed to submit proof for bulk verification")
	return nil
}

func (a Actor) ConfirmSectorProofsValid(rt Runtime, params *builtin.ConfirmSectorProofsParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)

	// This should be enforced by the power actor. We log here just in case
	// something goes wrong.
	if len(params.Sectors) > power.MaxMinerProveCommitsPerEpoch {
		rt.Log(rtt.WARN, "confirmed more prove commits in an epoch than permitted: %d > %d",
			len(params.Sectors), power.MaxMinerProveCommitsPerEpoch,
		)
	}

	var st State
	rt.StateReadonly(&st)
	store := adt.AsStore(rt)

	// This skips missing pre-commits.
	precommittedSectors, err := st.FindPrecommittedSectors(store, params.Sectors...)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load pre-committed sectors")

	confirmSectorProofsValid(rt, precommittedSectors, params.RewardBaselinePower, params.RewardSmoothed, params.QualityAdjPowerSmoothed)

	return nil
}

func confirmSectorProofsValid(rt Runtime, preCommits []*SectorPreCommitOnChainInfo, thisEpochBaselinePower big.Int,
	thisEpochRewardSmoothed smoothing.FilterEstimate, qualityAdjPowerSmoothed smoothing.FilterEstimate) {

	circulatingSupply := rt.TotalFilCircSupply()

	// 1. Activate deals, skipping pre-commits with invalid deals.
	//    - calls the market actor.
	// 2. Add new sectors.
	//    - loads and saves sectors.
	//    - loads and saves deadlines/partitions
	//
	// Ideally, we'd combine some of these operations, but at least we have
	// a constant number of them.

	activation := rt.CurrEpoch()
	// Pre-commits for new sectors.
	var validPreCommits []*SectorPreCommitOnChainInfo
	for _, precommit := range preCommits {
		if len(precommit.Info.DealIDs) > 0 {
			// Check (and activate) storage deals associated to sector. Abort if checks failed.
			// TODO: we should batch these calls...
			// https://github.com/filecoin-project/specs-actors/issues/474
			code := rt.Send(
				builtin.StorageMarketActorAddr,
				builtin.MethodsMarket.ActivateDeals,
				&market.ActivateDealsParams{
					DealIDs:      precommit.Info.DealIDs,
					SectorExpiry: precommit.Info.Expiration,
				},
				abi.NewTokenAmount(0),
				&builtin.Discard{},
			)

			if code != exitcode.Ok {
				rt.Log(rtt.INFO, "failed to activate deals on sector %d, dropping from prove commit set", precommit.Info.SectorNumber)
				continue
			}
		}

		validPreCommits = append(validPreCommits, precommit)
	}

	// When all prove commits have failed abort early
	if len(validPreCommits) == 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "all prove commits failed to validate")
	}

	totalPledge := big.Zero()
	depositToUnlock := big.Zero()
	newSectors := make([]*SectorOnChainInfo, 0)
	newlyVested := big.Zero()
	var st State
	store := adt.AsStore(rt)
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		newSectorNos := make([]abi.SectorNumber, 0, len(validPreCommits))
		for _, precommit := range validPreCommits {
			// compute initial pledge
			duration := precommit.Info.Expiration - activation
			// This should have been caught in precommit, but don't let other sectors fail because of it.
			if duration < MinSectorExpiration {
				rt.Log(rtt.WARN, "precommit %d has lifetime %d less than minimum. ignoring", precommit.Info.SectorNumber, duration, MinSectorExpiration)
				continue
			}
			pwr := QAPowerForWeight(info.SectorSize, duration, precommit.DealWeight, precommit.VerifiedDealWeight)

			dayReward := ExpectedRewardForPower(thisEpochRewardSmoothed, qualityAdjPowerSmoothed, pwr, builtin.EpochsInDay)
			// The storage pledge is recorded for use in computing the penalty if this sector is terminated
			// before its declared expiration.
			// It's not capped to 1 FIL, so can exceed the actual initial pledge requirement.
			storagePledge := ExpectedRewardForPower(thisEpochRewardSmoothed, qualityAdjPowerSmoothed, pwr, InitialPledgeProjectionPeriod)
			initialPledge := InitialPledgeForPower(pwr, thisEpochBaselinePower, thisEpochRewardSmoothed,
				qualityAdjPowerSmoothed, circulatingSupply)

			newSectorInfo := SectorOnChainInfo{
				SectorNumber:          precommit.Info.SectorNumber,
				SealProof:             precommit.Info.SealProof,
				SealedCID:             precommit.Info.SealedCID,
				DealIDs:               precommit.Info.DealIDs,
				Expiration:            precommit.Info.Expiration,
				Activation:            activation,
				DealWeight:            precommit.DealWeight,
				VerifiedDealWeight:    precommit.VerifiedDealWeight,
				InitialPledge:         initialPledge,
				ExpectedDayReward:     dayReward,
				ExpectedStoragePledge: storagePledge,
				ReplacedSectorAge:     0,          // The replacement mechanism is disabled since v7
				ReplacedDayReward:     big.Zero(), // The replacement mechanism is disabled since v7
			}

			depositToUnlock = big.Add(depositToUnlock, precommit.PreCommitDeposit)
			newSectors = append(newSectors, &newSectorInfo)
			newSectorNos = append(newSectorNos, newSectorInfo.SectorNumber)
			totalPledge = big.Add(totalPledge, initialPledge)
		}

		err := st.PutSectors(store, newSectors...)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to put new sectors")

		err = st.DeletePrecommittedSectors(store, newSectorNos...)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete precommited sectors")

		err = st.AssignSectorsToDeadlines(store, rt.CurrEpoch(), newSectors, info.WindowPoStPartitionSectors, info.SectorSize)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to assign new sectors to deadlines")

		// Unlock deposit for successful proofs, make it available for lock-up as initial pledge.
		err = st.AddPreCommitDeposit(depositToUnlock.Neg())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add pre-commit deposit %v", depositToUnlock.Neg())

		unlockedBalance, err := st.GetUnlockedBalance(rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate unlocked balance")
		if unlockedBalance.LessThan(totalPledge) {
			rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds for aggregate initial pledge requirement %s, available: %s", totalPledge, unlockedBalance)
		}

		err = st.AddInitialPledge(totalPledge)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add initial pledge %v", totalPledge)
		err = st.CheckBalanceInvariants(rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")
	})

	// Request pledge update for activated sector.
	notifyPledgeChanged(rt, big.Sub(totalPledge, newlyVested))
}

//type CheckSectorProvenParams struct {
//	SectorNumber abi.SectorNumber
//}
type CheckSectorProvenParams = miner0.CheckSectorProvenParams

func (a Actor) CheckSectorProven(rt Runtime, params *CheckSectorProvenParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerAcceptAny()

	if params.SectorNumber > abi.MaxSectorNumber {
		rt.Abortf(exitcode.ErrIllegalArgument, "sector number out of range")
	}

	var st State
	rt.StateReadonly(&st)
	store := adt.AsStore(rt)
	sectorNo := params.SectorNumber

	if _, found, err := st.GetSector(store, sectorNo); err != nil {
		rt.Abortf(exitcode.ErrIllegalState, "failed to load proven sector %v", sectorNo)
	} else if !found {
		rt.Abortf(exitcode.ErrNotFound, "sector %v not proven", sectorNo)
	}
	return nil
}

/////////////////////////
// Sector Modification //
/////////////////////////

//type ExtendSectorExpirationParams struct {
//	Extensions []ExpirationExtension
//}
type ExtendSectorExpirationParams = miner0.ExtendSectorExpirationParams

//type ExpirationExtension struct {
//	Deadline      uint64
//	Partition     uint64
//	Sectors       bitfield.BitField
//	NewExpiration abi.ChainEpoch
//}
type ExpirationExtension = miner0.ExpirationExtension

// Changes the expiration epoch for a sector to a new, later one.
// The sector must not be terminated or faulty.
// The sector's power is recomputed for the new expiration.
func (a Actor) ExtendSectorExpiration(rt Runtime, params *ExtendSectorExpirationParams) *abi.EmptyValue {
	if uint64(len(params.Extensions)) > DeclarationsMax {
		rt.Abortf(exitcode.ErrIllegalArgument, "too many declarations %d, max %d", len(params.Extensions), DeclarationsMax)
	}

	// limit the number of sectors declared at once
	// https://github.com/filecoin-project/specs-actors/issues/416
	var sectorCount uint64
	for _, decl := range params.Extensions {
		if decl.Deadline >= WPoStPeriodDeadlines {
			rt.Abortf(exitcode.ErrIllegalArgument, "deadline %d not in range 0..%d", decl.Deadline, WPoStPeriodDeadlines)
		}
		count, err := decl.Sectors.Count()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
			"failed to count sectors for deadline %d, partition %d",
			decl.Deadline, decl.Partition,
		)
		if sectorCount > math.MaxUint64-count {
			rt.Abortf(exitcode.ErrIllegalArgument, "sector bitfield integer overflow")
		}
		sectorCount += count
	}
	if sectorCount > AddressedSectorsMax {
		rt.Abortf(exitcode.ErrIllegalArgument,
			"too many sectors for declaration %d, max %d",
			sectorCount, AddressedSectorsMax,
		)
	}

	currEpoch := rt.CurrEpoch()

	powerDelta := NewPowerPairZero()
	pledgeDelta := big.Zero()
	store := adt.AsStore(rt)
	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		// Group declarations by deadline, and remember iteration order.
		// This should be merged with the iteration outside the state transaction.
		declsByDeadline := map[uint64][]*ExpirationExtension{}
		var deadlinesToLoad []uint64
		for i := range params.Extensions {
			// Take a pointer to the value inside the slice, don't
			// take a reference to the temporary loop variable as it
			// will be overwritten every iteration.
			decl := &params.Extensions[i]
			if _, ok := declsByDeadline[decl.Deadline]; !ok {
				deadlinesToLoad = append(deadlinesToLoad, decl.Deadline)
			}
			declsByDeadline[decl.Deadline] = append(declsByDeadline[decl.Deadline], decl)
		}
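		// Iteration order is recorded explicitly because Go map iteration order is
		// nondeterministic, and the resulting state changes must be identical on
		// every node executing this message.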

		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")

		for _, dlIdx := range deadlinesToLoad {
			deadline, err := deadlines.LoadDeadline(store, dlIdx)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)

			partitions, err := deadline.PartitionsArray(store)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load partitions for deadline %d", dlIdx)

			quant := st.QuantSpecForDeadline(dlIdx)

			// Group modified partitions by epoch to which they are extended. Duplicates are ok.
			partitionsByNewEpoch := map[abi.ChainEpoch][]uint64{}
			// Remember iteration order of epochs.
			var epochsToReschedule []abi.ChainEpoch

			for _, decl := range declsByDeadline[dlIdx] {
				var partition Partition
				found, err := partitions.Get(decl.Partition, &partition)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %v partition %v", dlIdx, decl.Partition)
				if !found {
					rt.Abortf(exitcode.ErrNotFound, "no such deadline %v partition %v", dlIdx, decl.Partition)
				}

				oldSectors, err := sectors.Load(decl.Sectors)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors in deadline %v partition %v", dlIdx, decl.Partition)
				newSectors := make([]*SectorOnChainInfo, len(oldSectors))
				for i, sector := range oldSectors {
					if !CanExtendSealProofType(sector.SealProof) {
						rt.Abortf(exitcode.ErrForbidden, "cannot extend expiration for sector %v with unsupported seal type %v",
							sector.SectorNumber, sector.SealProof)
					}
					// This can happen if the sector should have already expired, but hasn't
					// because the end of its deadline hasn't passed yet.
					if sector.Expiration < currEpoch {
						rt.Abortf(exitcode.ErrForbidden, "cannot extend expiration for expired sector %v, expired at %d, now %d",
							sector.SectorNumber,
							sector.Expiration,
							currEpoch,
						)
					}
					if decl.NewExpiration < sector.Expiration {
						rt.Abortf(exitcode.ErrIllegalArgument, "cannot reduce sector %v's expiration to %d from %d",
							sector.SectorNumber, decl.NewExpiration, sector.Expiration)
					}
					validateExpiration(rt, sector.Activation, decl.NewExpiration, sector.SealProof)

					// Remove "spent" deal weights
					newDealWeight := big.Div(
						big.Mul(sector.DealWeight, big.NewInt(int64(sector.Expiration-currEpoch))),
						big.NewInt(int64(sector.Expiration-sector.Activation)),
					)
					newVerifiedDealWeight := big.Div(
						big.Mul(sector.VerifiedDealWeight, big.NewInt(int64(sector.Expiration-currEpoch))),
						big.NewInt(int64(sector.Expiration-sector.Activation)),
					)

					newSector := *sector
					newSector.Expiration = decl.NewExpiration
					newSector.DealWeight = newDealWeight
					newSector.VerifiedDealWeight = newVerifiedDealWeight

					newSectors[i] = &newSector
				}

				// Overwrite sector infos.
				err = sectors.Store(newSectors...)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update sectors %v", decl.Sectors)

				// Remove old sectors from partition and assign new sectors.
				partitionPowerDelta, partitionPledgeDelta, err := partition.ReplaceSectors(store, oldSectors, newSectors, info.SectorSize, quant)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to replace sector expirations at deadline %v partition %v", dlIdx, decl.Partition)

				powerDelta = powerDelta.Add(partitionPowerDelta)
				pledgeDelta = big.Add(pledgeDelta, partitionPledgeDelta) // expected to be zero, see note below.

				err = partitions.Set(decl.Partition, &partition)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadline %v partition %v", dlIdx, decl.Partition)

				// Record the new partition expiration epoch for setting outside this loop over declarations.
				prevEpochPartitions, ok := partitionsByNewEpoch[decl.NewExpiration]
				partitionsByNewEpoch[decl.NewExpiration] = append(prevEpochPartitions, decl.Partition)
				if !ok {
					epochsToReschedule = append(epochsToReschedule, decl.NewExpiration)
				}
			}

			deadline.Partitions, err = partitions.Root()
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save partitions for deadline %d", dlIdx)

			// Record partitions in deadline expiration queue
			for _, epoch := range epochsToReschedule {
				pIdxs := partitionsByNewEpoch[epoch]
				err := deadline.AddExpirationPartitions(store, epoch, pIdxs, quant)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add expiration partitions to deadline %v epoch %v: %v",
					dlIdx, epoch, pIdxs)
			}

			err = deadlines.UpdateDeadline(store, dlIdx, deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadline %d", dlIdx)
		}

		st.Sectors, err = sectors.Root()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save sectors")

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})

	requestUpdatePower(rt, powerDelta)
	// Note: the pledge delta is expected to be zero, since pledge is not re-calculated for the extension.
	// But in case that ever changes, we can do the right thing here.
	notifyPledgeChanged(rt, pledgeDelta)
	return nil
}

//type TerminateSectorsParams struct {
//	Terminations []TerminationDeclaration
//}
type TerminateSectorsParams = miner0.TerminateSectorsParams

//type TerminationDeclaration struct {
//	Deadline  uint64
//	Partition uint64
//	Sectors   bitfield.BitField
//}
type TerminationDeclaration = miner0.TerminationDeclaration

//type TerminateSectorsReturn struct {
//	// Set to true if all early termination work has been completed. When
//	// false, the miner may choose to repeatedly invoke TerminateSectors
//	// with no new sectors to process the remainder of the pending
//	// terminations. While pending terminations are outstanding, the miner
//	// will not be able to withdraw funds.
//	Done bool
//}
type TerminateSectorsReturn = miner0.TerminateSectorsReturn

// Marks some sectors as terminated at the present epoch, earlier than their
// scheduled termination, and adds these sectors to the early termination queue.
// This method then processes up to AddressedSectorsMax sectors and
// AddressedPartitionsMax partitions from the early termination queue,
// terminating deals, paying fines, and returning pledge collateral. While
// sectors remain in this queue:
//
//  1. The miner will be unable to withdraw funds.
//  2. The chain will process up to AddressedSectorsMax sectors and
//     AddressedPartitionsMax partitions per epoch until the queue is empty.
//
// The sectors are immediately ignored for Window PoSt proofs, and should be
// masked in the same way as faulty sectors. A miner may not terminate sectors in the
// current deadline or the next deadline to be proven.
//
// This function may be invoked with no new sectors to explicitly process the
// next batch of sectors.
func (a Actor) TerminateSectors(rt Runtime, params *TerminateSectorsParams) *TerminateSectorsReturn {
	// Note: this cannot terminate pre-committed but un-proven sectors.
	// They must be allowed to expire (and deposit burnt).

	if len(params.Terminations) > DeclarationsMax {
		rt.Abortf(exitcode.ErrIllegalArgument,
			"too many declarations when terminating sectors: %d > %d",
			len(params.Terminations), DeclarationsMax,
		)
	}

	toProcess := make(DeadlineSectorMap)
	for _, term := range params.Terminations {
		err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
			"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
		)
	}
	err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")

	var hadEarlyTerminations bool
	var st State
	store := adt.AsStore(rt)
	currEpoch := rt.CurrEpoch()
	powerDelta := NewPowerPairZero()
	rt.StateTransaction(&st, func() {
		hadEarlyTerminations = havePendingEarlyTerminations(rt, &st)

		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		// We're only reading the sectors, so there's no need to save this back.
		// However, we still want to avoid re-loading this array per-partition.
		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors")

		err = toProcess.ForEach(func(dlIdx uint64, partitionSectors PartitionSectorMap) error {
			// If the deadline is the current or next deadline to prove, don't allow terminating sectors.
			// We assume that deadlines are immutable when being proven.
			if !deadlineIsMutable(st.CurrentProvingPeriodStart(currEpoch), dlIdx, currEpoch) {
				rt.Abortf(exitcode.ErrIllegalArgument, "cannot terminate sectors in immutable deadline %d", dlIdx)
			}

			quant := st.QuantSpecForDeadline(dlIdx)

			deadline, err := deadlines.LoadDeadline(store, dlIdx)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)

			removedPower, err := deadline.TerminateSectors(store, sectors, currEpoch, partitionSectors, info.SectorSize, quant)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to terminate sectors in deadline %d", dlIdx)

			st.EarlyTerminations.Set(dlIdx)

			powerDelta = powerDelta.Sub(removedPower)

			err = deadlines.UpdateDeadline(store, dlIdx, deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", dlIdx)

			return nil
		})
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to walk sectors")

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})

	epochReward := requestCurrentEpochBlockReward(rt)
	pwrTotal := requestCurrentTotalPower(rt)

	// Now, try to process these sectors.
	more := processEarlyTerminations(rt, epochReward.ThisEpochRewardSmoothed, pwrTotal.QualityAdjPowerSmoothed)
	if more && !hadEarlyTerminations {
		// We have remaining terminations, and we didn't _previously_
		// have early terminations to process, so schedule a cron job.
		// NOTE: This isn't quite correct. If we repeatedly fill and empty
		// the queue, we'll keep scheduling new cron jobs. However, in
		// practice, that shouldn't be all that bad.
		scheduleEarlyTerminationWork(rt)
	}

	rt.StateReadonly(&st)
	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	requestUpdatePower(rt, powerDelta)
	return &TerminateSectorsReturn{Done: !more}
}

////////////
// Faults //
////////////

//type DeclareFaultsParams struct {
//	Faults []FaultDeclaration
//}
type DeclareFaultsParams = miner0.DeclareFaultsParams

//type FaultDeclaration struct {
//	// The deadline to which the faulty sectors are assigned, in range [0..WPoStPeriodDeadlines)
//	Deadline uint64
//	// Partition index within the deadline containing the faulty sectors.
//	Partition uint64
//	// Sectors in the partition being declared faulty.
//	Sectors bitfield.BitField
//}
type FaultDeclaration = miner0.FaultDeclaration

func (a Actor) DeclareFaults(rt Runtime, params *DeclareFaultsParams) *abi.EmptyValue {
	if len(params.Faults) > DeclarationsMax {
		rt.Abortf(exitcode.ErrIllegalArgument,
			"too many fault declarations for a single message: %d > %d",
			len(params.Faults), DeclarationsMax,
		)
	}

	toProcess := make(DeadlineSectorMap)
	for _, term := range params.Faults {
		err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
			"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
		)
	}
	err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")

	store := adt.AsStore(rt)
	var st State
	powerDelta := NewPowerPairZero()
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		deadlines, err := st.LoadDeadlines(store)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")

		currEpoch := rt.CurrEpoch()
		err = toProcess.ForEach(func(dlIdx uint64, pm PartitionSectorMap) error {
			targetDeadline, err := declarationDeadlineInfo(st.CurrentProvingPeriodStart(currEpoch), dlIdx, currEpoch)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid fault declaration deadline %d", dlIdx)

			err = validateFRDeclarationDeadline(targetDeadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed fault declaration at deadline %d", dlIdx)

			deadline, err := deadlines.LoadDeadline(store, dlIdx)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)

			faultExpirationEpoch := targetDeadline.Last() + FaultMaxAge
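			// Sectors recorded as faulty here are scheduled for automatic termination
			// at faultExpirationEpoch (FaultMaxAge after the target deadline closes)
			// unless they are recovered before then.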
			deadlinePowerDelta, err := deadline.RecordFaults(store, sectors, info.SectorSize, QuantSpecForDeadline(targetDeadline), faultExpirationEpoch, pm)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to declare faults for deadline %d", dlIdx)

			err = deadlines.UpdateDeadline(store, dlIdx, deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store deadline %d partitions", dlIdx)

			powerDelta = powerDelta.Add(deadlinePowerDelta)
			return nil
		})
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to iterate deadlines")

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})

	// Remove power for new faulty sectors.
	// NOTE: It would be permissible to delay the power loss until the deadline closes, but that would require
	// additional accounting state.
	// https://github.com/filecoin-project/specs-actors/issues/414
	requestUpdatePower(rt, powerDelta)

	// Payment of penalty for declared faults is deferred to the deadline cron.
	return nil
}

//type DeclareFaultsRecoveredParams struct {
//	Recoveries []RecoveryDeclaration
//}
type DeclareFaultsRecoveredParams = miner0.DeclareFaultsRecoveredParams

//type RecoveryDeclaration struct {
//	// The deadline to which the recovered sectors are assigned, in range [0..WPoStPeriodDeadlines)
//	Deadline uint64
//	// Partition index within the deadline containing the recovered sectors.
//	Partition uint64
//	// Sectors in the partition being declared recovered.
//	Sectors bitfield.BitField
//}
type RecoveryDeclaration = miner0.RecoveryDeclaration

func (a Actor) DeclareFaultsRecovered(rt Runtime, params *DeclareFaultsRecoveredParams) *abi.EmptyValue {
	if len(params.Recoveries) > DeclarationsMax {
		rt.Abortf(exitcode.ErrIllegalArgument,
			"too many recovery declarations for a single message: %d > %d",
			len(params.Recoveries), DeclarationsMax,
		)
	}

	toProcess := make(DeadlineSectorMap)
	for _, term := range params.Recoveries {
		err := toProcess.Add(term.Deadline, term.Partition, term.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument,
			"failed to process deadline %d, partition %d", term.Deadline, term.Partition,
		)
	}
	err := toProcess.Check(AddressedPartitionsMax, AddressedSectorsMax)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "cannot process requested parameters")

	store := adt.AsStore(rt)
	var st State
	feeToBurn := abi.NewTokenAmount(0)
	rt.StateTransaction(&st, func() {
		// Verify unlocked funds cover both InitialPledgeRequirement and FeeDebt
		// and repay fee debt now.
		feeToBurn = RepayDebtsOrAbort(rt, &st)

		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)
		if ConsensusFaultActive(info, rt.CurrEpoch()) {
			rt.Abortf(exitcode.ErrForbidden, "recovery not allowed during active consensus fault")
		}

		deadlines, err := st.LoadDeadlines(adt.AsStore(rt))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")

		currEpoch := rt.CurrEpoch()
		err = toProcess.ForEach(func(dlIdx uint64, pm PartitionSectorMap) error {
			targetDeadline, err := declarationDeadlineInfo(st.CurrentProvingPeriodStart(currEpoch), dlIdx, currEpoch)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid recovery declaration deadline %d", dlIdx)
			err = validateFRDeclarationDeadline(targetDeadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed recovery declaration at deadline %d", dlIdx)

			deadline, err := deadlines.LoadDeadline(store, dlIdx)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)

			err = deadline.DeclareFaultsRecovered(store, sectors, info.SectorSize, pm)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to declare recoveries for deadline %d", dlIdx)

			err = deadlines.UpdateDeadline(store, dlIdx, deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store deadline %d", dlIdx)
			return nil
		})
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to walk sectors")

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})

	burnFunds(rt, feeToBurn, BurnMethodDeclareFaultsRecovered)
	rt.StateReadonly(&st)
	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	// Power is not restored yet; it is restored only when the recovered sectors are successfully proven in Window PoSt.
	return nil
}

/////////////////
// Maintenance //
/////////////////

//type CompactPartitionsParams struct {
//	Deadline   uint64
//	Partitions bitfield.BitField
//}
type CompactPartitionsParams = miner0.CompactPartitionsParams

// Compacts a number of partitions at one deadline by removing terminated sectors, re-ordering the remaining sectors,
// and assigning them to new partitions so as to completely fill all but one partition with live sectors.
// The addressed partitions are removed from the deadline, and new ones appended.
// The final partition in the deadline is always included in the compaction, whether or not explicitly requested.
// Removed sectors are removed from state entirely.
// May not be invoked if the deadline has any un-processed early terminations.
func (a Actor) CompactPartitions(rt Runtime, params *CompactPartitionsParams) *abi.EmptyValue {
	if params.Deadline >= WPoStPeriodDeadlines {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid deadline %v", params.Deadline)
	}

	partitionCount, err := params.Partitions.Count()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to parse partitions bitfield")

	store := adt.AsStore(rt)
	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		if !deadlineAvailableForCompaction(st.CurrentProvingPeriodStart(rt.CurrEpoch()), params.Deadline, rt.CurrEpoch()) {
			rt.Abortf(exitcode.ErrForbidden,
				"cannot compact deadline %d during its challenge window, or the prior challenge window, or before %d epochs have passed since its last challenge window ended", params.Deadline, WPoStDisputeWindow)
		}

		submissionPartitionLimit := loadPartitionsSectorsMax(info.WindowPoStPartitionSectors)
		if partitionCount > submissionPartitionLimit {
			rt.Abortf(exitcode.ErrIllegalArgument, "too many partitions %d, limit %d", partitionCount, submissionPartitionLimit)
		}

		quant := st.QuantSpecForDeadline(params.Deadline)

		deadlines, err := st.LoadDeadlines(store)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		deadline, err := deadlines.LoadDeadline(store, params.Deadline)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", params.Deadline)

		live, dead, removedPower, err := deadline.RemovePartitions(store, params.Partitions, quant)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to remove partitions from deadline %d", params.Deadline)

		err = st.DeleteSectors(store, dead)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete dead sectors")

		sectors, err := st.LoadSectorInfos(store, live)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load moved sectors")

		proven := true
		addedPower, err := deadline.AddSectors(store, info.WindowPoStPartitionSectors, proven, sectors, info.SectorSize, quant)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add back moved sectors")

		if !removedPower.Equals(addedPower) {
			rt.Abortf(exitcode.ErrIllegalState, "power changed when compacting partitions: was %v, is now %v", removedPower, addedPower)
		}
		err = deadlines.UpdateDeadline(store, params.Deadline, deadline)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update deadline %d", params.Deadline)

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")
	})
	return nil
}

//type CompactSectorNumbersParams struct {
//	MaskSectorNumbers bitfield.BitField
//}
type CompactSectorNumbersParams = miner0.CompactSectorNumbersParams

// Compacts sector number allocations to reduce the size of the allocated sector
// number bitfield.
//
// When allocating sector numbers sequentially, or in sequential groups, this
// bitfield should remain fairly small. However, if the bitfield grows large
// enough such that PreCommitSector fails (or becomes expensive), this method
// can be called to mask out (throw away) entire ranges of unused sector IDs.
// For example, if sectors 1-99 and 101-200 have been allocated, sector number
// 100 can be masked out to collapse these two ranges into one.
func (a Actor) CompactSectorNumbers(rt Runtime, params *CompactSectorNumbersParams) *abi.EmptyValue {
	lastSectorNo, err := params.MaskSectorNumbers.Last()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "invalid mask bitfield")
	if lastSectorNo > abi.MaxSectorNumber {
		rt.Abortf(exitcode.ErrIllegalArgument, "masked sector number %d exceeded max sector number", lastSectorNo)
	}

	store := adt.AsStore(rt)
	var st State
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		err := st.AllocateSectorNumbers(store, params.MaskSectorNumbers, AllowCollisions)
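		// Masking (allocating) these numbers makes them unusable by any future
		// pre-commit; AllowCollisions permits the mask to include numbers that were
		// already allocated.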

		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to mask sector numbers")
	})
	return nil
}

///////////////////////
// Pledge Collateral //
///////////////////////

// Locks up some amount of the miner's unlocked balance (including funds received alongside the invoking message).
func (a Actor) ApplyRewards(rt Runtime, params *builtin.ApplyRewardParams) *abi.EmptyValue {
	if params.Reward.Sign() < 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "cannot lock up a negative amount of funds")
	}
	if params.Penalty.Sign() < 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "cannot penalize a negative amount of funds")
	}

	var st State
	pledgeDeltaTotal := big.Zero()
	toBurn := big.Zero()
	rt.StateTransaction(&st, func() {
		var err error
		store := adt.AsStore(rt)
		rt.ValidateImmediateCallerIs(builtin.RewardActorAddr)

		rewardToLock, lockedRewardVestingSpec := LockedRewardFromReward(params.Reward)
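		// A protocol-defined share of the reward is locked and vests according to the
		// returned vesting schedule; the remainder is immediately available balance.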

		// This ensures the miner has sufficient funds to lock up rewardToLock.
		// This should always be true if the reward actor sends reward funds with the message.
		unlockedBalance, err := st.GetUnlockedBalance(rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate unlocked balance")
		if unlockedBalance.LessThan(rewardToLock) {
			rt.Abortf(exitcode.ErrInsufficientFunds, "insufficient funds to lock, available: %v, requested: %v", unlockedBalance, rewardToLock)
		}

		newlyVested, err := st.AddLockedFunds(store, rt.CurrEpoch(), rewardToLock, lockedRewardVestingSpec)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to lock funds in vesting table")
		pledgeDeltaTotal = big.Sub(pledgeDeltaTotal, newlyVested)
		pledgeDeltaTotal = big.Add(pledgeDeltaTotal, rewardToLock)

		// If the miner incurred block mining penalties, charge these to the miner's fee debt.
		err = st.ApplyPenalty(params.Penalty)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")
		// Attempt to repay all fee debt in this call. In most cases the miner will have enough
		// funds in the *reward alone* to cover the penalty. In the rare case a miner incurs more
		// penalty than it can pay for with reward and existing funds, it will go into fee debt.
		penaltyFromVesting, penaltyFromBalance, err := st.RepayPartialDebtInPriorityOrder(store, rt.CurrEpoch(), rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to repay penalty")
		pledgeDeltaTotal = big.Sub(pledgeDeltaTotal, penaltyFromVesting)
		toBurn = big.Add(penaltyFromVesting, penaltyFromBalance)
	})

	notifyPledgeChanged(rt, pledgeDeltaTotal)
	burnFunds(rt, toBurn, BurnMethodApplyRewards)
	rt.StateReadonly(&st)
	err := st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return nil
}

//type ReportConsensusFaultParams struct {
//	BlockHeader1     []byte
//	BlockHeader2     []byte
//	BlockHeaderExtra []byte
//}
type ReportConsensusFaultParams = miner0.ReportConsensusFaultParams

func (a Actor) ReportConsensusFault(rt Runtime, params *ReportConsensusFaultParams) *abi.EmptyValue {
	// Note: only the first report of any fault is processed because it sets the
	// ConsensusFaultElapsed state variable to an epoch after the fault, and reports prior to
	// that epoch are no longer valid.
	rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
	reporter := rt.Caller()

	fault, err := rt.VerifyConsensusFault(params.BlockHeader1, params.BlockHeader2, params.BlockHeaderExtra)
	if err != nil {
		rt.Abortf(exitcode.ErrIllegalArgument, "fault not verified: %s", err)
	}
	if fault.Target != rt.Receiver() {
		rt.Abortf(exitcode.ErrIllegalArgument, "fault by %v reported to miner %v", fault.Target, rt.Receiver())
	}

	// Elapsed since the fault (i.e. since the higher of the two blocks)
	currEpoch := rt.CurrEpoch()
	faultAge := currEpoch - fault.Epoch
	if faultAge <= 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid fault epoch %v ahead of current %v", fault.Epoch, currEpoch)
	}

	// Charge the miner the consensus fault penalty, and give a portion of it
	// to the reporter as a reward.
	var st State
	rewardStats := requestCurrentEpochBlockReward(rt)
	// The policy amounts we should burn and send to the reporter.
	// These may differ from the funds actually sent when the miner goes into fee debt.
	thisEpochReward := smoothing.Estimate(&rewardStats.ThisEpochRewardSmoothed)
	faultPenalty := ConsensusFaultPenalty(thisEpochReward)
	slasherReward := RewardForConsensusSlashReport(thisEpochReward)
	pledgeDelta := big.Zero()

	// The amounts actually sent to burnt funds and reporter
	burnAmount := big.Zero()
	rewardAmount := big.Zero()
	rt.StateTransaction(&st, func() {
		info := getMinerInfo(rt, &st)

		// Reject fault reports older than the end of the last consensus-fault exclusion period.
		if fault.Epoch < info.ConsensusFaultElapsed {
			rt.Abortf(exitcode.ErrForbidden, "fault epoch %d is too old, last exclusion period ended at %d", fault.Epoch, info.ConsensusFaultElapsed)
		}

		err := st.ApplyPenalty(faultPenalty)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")

		// Pay penalty
		penaltyFromVesting, penaltyFromBalance, err := st.RepayPartialDebtInPriorityOrder(adt.AsStore(rt), currEpoch, rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pay fees")
		// Burn the amount actually payable. Any difference between this and faultPenalty is already recorded as FeeDebt.
		burnAmount = big.Add(penaltyFromVesting, penaltyFromBalance)
		pledgeDelta = big.Add(pledgeDelta, penaltyFromVesting.Neg())

		// clamp reward at funds burnt
		rewardAmount = big.Min(burnAmount, slasherReward)
		// reduce burnAmount by rewardAmount
		burnAmount = big.Sub(burnAmount, rewardAmount)
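		// The reporter is thus paid out of the penalty actually collected, so the total
		// leaving the miner (burn plus reward) never exceeds that amount.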
		info.ConsensusFaultElapsed = currEpoch + ConsensusFaultIneligibilityDuration
		err = st.SaveInfo(adt.AsStore(rt), info)
		builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to save miner info")
	})
	code := rt.Send(reporter, builtin.MethodSend, nil, rewardAmount, &builtin.Discard{})
	if !code.IsSuccess() {
		rt.Log(rtt.ERROR, "failed to send reward")
	}
	burnFunds(rt, burnAmount, BurnMethodReportConsensusFault)
	notifyPledgeChanged(rt, pledgeDelta)

	rt.StateReadonly(&st)
	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return nil
}

//type WithdrawBalanceParams struct {
//	AmountRequested abi.TokenAmount
//}
type WithdrawBalanceParams = miner0.WithdrawBalanceParams

// Attempt to withdraw the specified amount from the miner's available balance.
// Only owner key has permission to withdraw.
// If less than the specified amount is available, yields the entire available balance.
// Returns the amount withdrawn.
func (a Actor) WithdrawBalance(rt Runtime, params *WithdrawBalanceParams) *abi.TokenAmount {
	var st State
	if params.AmountRequested.LessThan(big.Zero()) {
		rt.Abortf(exitcode.ErrIllegalArgument, "negative fund requested for withdrawal: %s", params.AmountRequested)
	}
	var info *MinerInfo
	newlyVested := big.Zero()
	feeToBurn := big.Zero()
	availableBalance := big.Zero()
	rt.StateTransaction(&st, func() {
		var err error
		info = getMinerInfo(rt, &st)
		// Only the owner is allowed to withdraw the balance as it belongs to/is controlled by the owner
		// and not the worker.
		rt.ValidateImmediateCallerIs(info.Owner)

		// Ensure we don't have any pending terminations.
		if count, err := st.EarlyTerminations.Count(); err != nil {
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count early terminations")
		} else if count > 0 {
			rt.Abortf(exitcode.ErrForbidden,
				"cannot withdraw funds while %d deadlines have terminated sectors with outstanding fees",
				count,
			)
		}

		// Unlock vested funds so we can spend them.
		newlyVested, err = st.UnlockVestedFunds(adt.AsStore(rt), rt.CurrEpoch())
		if err != nil {
			rt.Abortf(exitcode.ErrIllegalState, "failed to vest fund: %v", err)
		}
		// available balance already accounts for fee debt so it is correct to call
		// this before RepayDebts. We would have to
		// subtract fee debt explicitly if we called this after.
		availableBalance, err = st.GetAvailableBalance(rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate available balance")

		// Verify unlocked funds cover both InitialPledgeRequirement and FeeDebt
		// and repay fee debt now.
		feeToBurn = RepayDebtsOrAbort(rt, &st)
	})

	amountWithdrawn := big.Min(availableBalance, params.AmountRequested)
	builtin.RequireState(rt, amountWithdrawn.GreaterThanEqual(big.Zero()), "negative amount to withdraw: %v", amountWithdrawn)
	builtin.RequireState(rt, amountWithdrawn.LessThanEqual(availableBalance), "amount to withdraw %v exceeds available %v", amountWithdrawn, availableBalance)

	if amountWithdrawn.GreaterThan(abi.NewTokenAmount(0)) {
		code := rt.Send(info.Owner, builtin.MethodSend, nil, amountWithdrawn, &builtin.Discard{})
		builtin.RequireSuccess(rt, code, "failed to withdraw balance")
	}

	burnFunds(rt, feeToBurn, BurnMethodWithdrawBalance)

	pledgeDelta := newlyVested.Neg()
	notifyPledgeChanged(rt, pledgeDelta)

	err := st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return &amountWithdrawn
}

func (a Actor) RepayDebt(rt Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	var st State
	var fromVesting, fromBalance abi.TokenAmount
	rt.StateTransaction(&st, func() {
		var err error
		info := getMinerInfo(rt, &st)
		rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

		// Repay as much fee debt as possible.
		fromVesting, fromBalance, err = st.RepayPartialDebtInPriorityOrder(adt.AsStore(rt), rt.CurrEpoch(), rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock fee debt")
	})

	notifyPledgeChanged(rt, fromVesting.Neg())
	burnFunds(rt, big.Sum(fromVesting, fromBalance), BurnMethodRepayDebt)
	err := st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")

	return nil
}

type ReplicaUpdate = miner7.ReplicaUpdate

type ProveReplicaUpdatesParams = miner7.ProveReplicaUpdatesParams

func (a Actor) ProveReplicaUpdates(rt Runtime, params *ProveReplicaUpdatesParams) *bitfield.BitField {
	// Validate inputs

	builtin.RequireParam(rt, len(params.Updates) <= ProveReplicaUpdatesMaxSize, "too many updates (%d > %d)", len(params.Updates), ProveReplicaUpdatesMaxSize)

	store := adt.AsStore(rt)
	var stReadOnly State
	rt.StateReadonly(&stReadOnly)
	info := getMinerInfo(rt, &stReadOnly)

	rt.ValidateImmediateCallerIs(append(info.ControlAddresses, info.Owner, info.Worker)...)

	sectors, err := LoadSectors(store, stReadOnly.Sectors)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")

	powerDelta := NewPowerPairZero()
	pledgeDelta := big.Zero()

	type updateAndSectorInfo struct {
		update     *ReplicaUpdate
		sectorInfo *SectorOnChainInfo
	}

	var sectorsDeals []market.SectorDeals
	var sectorsDataSpec []*market.SectorDataSpec
	var validatedUpdates []*updateAndSectorInfo
	sectorNumbers := bitfield.New()
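	// Each update is validated individually; entries failing a check below are
	// skipped with a log message rather than aborting, so a single bad update does
	// not invalidate the rest of the batch.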
	for i := range params.Updates {
		update := params.Updates[i]
		// Bitfield.IsSet() is fast when there are only locally-set values.
		set, err := sectorNumbers.IsSet(uint64(update.SectorID))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "error checking sector number")
		if set {
			rt.Log(rtt.INFO, "duplicate sector being updated %d, skipping", update.SectorID)
			continue
		}

		sectorNumbers.Set(uint64(update.SectorID))

		if len(update.ReplicaProof) > 4096 {
			rt.Log(rtt.INFO, "update proof is too large (%d), skipping sector %d", len(update.ReplicaProof), update.SectorID)
			continue
		}

		if len(update.Deals) <= 0 {
			rt.Log(rtt.INFO, "must have deals to update, skipping sector %d", update.SectorID)
			continue
		}

		if uint64(len(update.Deals)) > SectorDealsMax(info.SectorSize) {
			rt.Log(rtt.INFO, "more deals than policy allows, skipping sector %d", update.SectorID)
			continue
		}

		if update.Deadline >= WPoStPeriodDeadlines {
			rt.Log(rtt.INFO, "deadline %d not in range 0..%d, skipping sector %d", update.Deadline, WPoStPeriodDeadlines, update.SectorID)
			continue
		}

		if !update.NewSealedSectorCID.Defined() {
			rt.Log(rtt.INFO, "new sealed CID undefined, skipping sector %d", update.SectorID)
			continue
		}

		if update.NewSealedSectorCID.Prefix() != SealedCIDPrefix {
			rt.Log(rtt.INFO, "new sealed CID had wrong prefix %s, skipping sector %d", update.NewSealedSectorCID, update.SectorID)
			continue
		}

		// If the deadline is the current or next deadline to prove, don't allow updating sectors.
		// We assume that deadlines are immutable when being proven.
		if !deadlineIsMutable(stReadOnly.CurrentProvingPeriodStart(rt.CurrEpoch()), update.Deadline, rt.CurrEpoch()) {
			rt.Log(rtt.INFO, "cannot upgrade sectors in immutable deadline %d, skipping sector %d", update.Deadline, update.SectorID)
			continue
		}

		healthy, err := stReadOnly.CheckSectorActive(store, update.Deadline, update.Partition, update.SectorID, true)

		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "error checking sector health")

		if !healthy {
			rt.Log(rtt.INFO, "sector isn't healthy, skipping sector %d", update.SectorID)
			continue
		}

		sectorInfo, err := sectors.MustGet(update.SectorID)
		if err != nil {
			rt.Log(rtt.INFO, "failed to get sector, skipping sector %d", update.SectorID)
			continue
		}

		if len(sectorInfo.DealIDs) != 0 {
			rt.Log(rtt.INFO, "cannot update sector with deals, skipping sector %d", update.SectorID)
			continue
		}

		code := rt.Send(
			builtin.StorageMarketActorAddr,
			builtin.MethodsMarket.ActivateDeals,
			&market.ActivateDealsParams{
				DealIDs:      update.Deals,
				SectorExpiry: sectorInfo.Expiration,
			},
			abi.NewTokenAmount(0),
			&builtin.Discard{},
		)

		if code != exitcode.Ok {
			rt.Log(rtt.INFO, "failed to activate deals, skipping sector %d", update.SectorID)
			continue
		}

		validatedUpdates = append(validatedUpdates, &updateAndSectorInfo{
			update:     &update,
			sectorInfo: sectorInfo,
		})

		sectorsDeals = append(sectorsDeals, market.SectorDeals{DealIDs: update.Deals, SectorExpiry: sectorInfo.Expiration})
		sectorsDataSpec = append(sectorsDataSpec, &market.SectorDataSpec{
			SectorType: sectorInfo.SealProof,
			DealIDs:    update.Deals,
		})
	}

	builtin.RequireParam(rt, len(validatedUpdates) > 0, "no valid updates")

	// Errors past this point cause the ProveReplicaUpdates call to fail (no more skipping sectors)

	dealWeights := requestDealWeights(rt, sectorsDeals)
	builtin.RequirePredicate(rt, len(dealWeights.Sectors) == len(validatedUpdates), exitcode.ErrIllegalState,
		"deal weight request returned %d records, expected %d", len(dealWeights.Sectors), len(validatedUpdates))

	unsealedSectorCIDs := requestUnsealedSectorCIDs(rt, sectorsDataSpec...)
	builtin.RequirePredicate(rt, len(unsealedSectorCIDs) == len(validatedUpdates), exitcode.ErrIllegalState,
		"unsealed sector cid request returned %d records, expected %d", len(unsealedSectorCIDs), len(validatedUpdates))

	type updateWithDetails struct {
		update            *ReplicaUpdate
		sectorInfo        *SectorOnChainInfo
		dealWeight        market.SectorWeights
		unsealedSectorCID cid.Cid
	}

	// Group declarations by deadline
	declsByDeadline := map[uint64][]*updateWithDetails{}
	var deadlinesToLoad []uint64
	for i, updateWithSectorInfo := range validatedUpdates {
		if _, ok := declsByDeadline[updateWithSectorInfo.update.Deadline]; !ok {
			deadlinesToLoad = append(deadlinesToLoad, updateWithSectorInfo.update.Deadline)
		}
		declsByDeadline[updateWithSectorInfo.update.Deadline] = append(declsByDeadline[updateWithSectorInfo.update.Deadline], &updateWithDetails{
			update:            updateWithSectorInfo.update,
			sectorInfo:        updateWithSectorInfo.sectorInfo,
			dealWeight:        dealWeights.Sectors[i],
			unsealedSectorCID: unsealedSectorCIDs[i],
		})
	}

	rewRet := requestCurrentEpochBlockReward(rt)
	powRet := requestCurrentTotalPower(rt)

	succeededSectors := bitfield.New()
	var st State
	rt.StateTransaction(&st, func() {
		deadlines, err := st.LoadDeadlines(store)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadlines")

		newSectors := make([]*SectorOnChainInfo, 0)
		for _, dlIdx := range deadlinesToLoad {
			deadline, err := deadlines.LoadDeadline(store, dlIdx)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %d", dlIdx)

			partitions, err := deadline.PartitionsArray(store)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load partitions for deadline %d", dlIdx)

			quant := st.QuantSpecForDeadline(dlIdx)

			for _, updateWithDetails := range declsByDeadline[dlIdx] {
				updateProofType, err := updateWithDetails.sectorInfo.SealProof.RegisteredUpdateProof()
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "couldn't load update proof type")
				builtin.RequirePredicate(rt, updateWithDetails.update.UpdateProofType == updateProofType, exitcode.ErrIllegalArgument, "unsupported update proof type %d", updateWithDetails.update.UpdateProofType)

				err = rt.VerifyReplicaUpdate(
					proof.ReplicaUpdateInfo{
						UpdateProofType:      updateProofType,
						NewSealedSectorCID:   updateWithDetails.update.NewSealedSectorCID,
						OldSealedSectorCID:   updateWithDetails.sectorInfo.SealedCID,
						NewUnsealedSectorCID: updateWithDetails.unsealedSectorCID,
						Proof:                updateWithDetails.update.ReplicaProof,
					})

				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to verify replica proof for sector %d", updateWithDetails.sectorInfo.SectorNumber)

				newSectorInfo := *updateWithDetails.sectorInfo

				newSectorInfo.SealedCID = updateWithDetails.update.NewSealedSectorCID
				if newSectorInfo.SectorKeyCID == nil {
					newSectorInfo.SectorKeyCID = &updateWithDetails.sectorInfo.SealedCID
				}

				newSectorInfo.DealIDs = updateWithDetails.update.Deals
				newSectorInfo.Activation = rt.CurrEpoch()

				newSectorInfo.DealWeight = updateWithDetails.dealWeight.DealWeight
				newSectorInfo.VerifiedDealWeight = updateWithDetails.dealWeight.VerifiedDealWeight

				// compute initial pledge
				duration := updateWithDetails.sectorInfo.Expiration - rt.CurrEpoch()

				pwr := QAPowerForWeight(info.SectorSize, duration, newSectorInfo.DealWeight, newSectorInfo.VerifiedDealWeight)

				newSectorInfo.ReplacedDayReward = updateWithDetails.sectorInfo.ExpectedDayReward
				newSectorInfo.ExpectedDayReward = ExpectedRewardForPower(rewRet.ThisEpochRewardSmoothed, powRet.QualityAdjPowerSmoothed, pwr, builtin.EpochsInDay)
				newSectorInfo.ExpectedStoragePledge = ExpectedRewardForPower(rewRet.ThisEpochRewardSmoothed, powRet.QualityAdjPowerSmoothed, pwr, InitialPledgeProjectionPeriod)
				newSectorInfo.ReplacedSectorAge = maxEpoch(0, rt.CurrEpoch()-updateWithDetails.sectorInfo.Activation)

				initialPledgeAtUpgrade := InitialPledgeForPower(pwr, rewRet.ThisEpochBaselinePower, rewRet.ThisEpochRewardSmoothed,
					powRet.QualityAdjPowerSmoothed, rt.TotalFilCircSupply())
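				// Pledge only ratchets upward on upgrade: if the requirement computed at the
				// current epoch exceeds the sector's existing pledge, the deficit is locked
				// from unlocked balance; a lower requirement leaves the pledge unchanged.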

				if initialPledgeAtUpgrade.GreaterThan(updateWithDetails.sectorInfo.InitialPledge) {
					deficit := big.Sub(initialPledgeAtUpgrade, updateWithDetails.sectorInfo.InitialPledge)

					unlockedBalance, err := st.GetUnlockedBalance(rt.CurrentBalance())
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate unlocked balance")
					builtin.RequirePredicate(rt, unlockedBalance.GreaterThanEqual(deficit), exitcode.ErrInsufficientFunds, "insufficient funds for new initial pledge requirement %s, available: %s, skipping sector %d",
						deficit, unlockedBalance, updateWithDetails.sectorInfo.SectorNumber)

					err = st.AddInitialPledge(deficit)
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add initial pledge")

					newSectorInfo.InitialPledge = initialPledgeAtUpgrade
				}

				var partition Partition
				found, err := partitions.Get(updateWithDetails.update.Partition, &partition)

				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deadline %v partition %v",
					updateWithDetails.update.Deadline, updateWithDetails.update.Partition)

				if !found {
					rt.Abortf(exitcode.ErrNotFound, "no such deadline %v partition %v", dlIdx, updateWithDetails.update.Partition)
				}

				partitionPowerDelta, partitionPledgeDelta, err := partition.ReplaceSectors(store,
					[]*SectorOnChainInfo{updateWithDetails.sectorInfo},
					[]*SectorOnChainInfo{&newSectorInfo},
					info.SectorSize,
					quant)

				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to replace sector at deadline %d partition %d", updateWithDetails.update.Deadline, updateWithDetails.update.Partition)

				powerDelta = powerDelta.Add(partitionPowerDelta)
				pledgeDelta = big.Add(pledgeDelta, partitionPledgeDelta)

				err = partitions.Set(updateWithDetails.update.Partition, &partition)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadline %v partition %v",
					updateWithDetails.update.Deadline,
					updateWithDetails.update.Partition)

				newSectors = append(newSectors, &newSectorInfo)
				succeededSectors.Set(uint64(newSectorInfo.SectorNumber))
			}

			deadline.Partitions, err = partitions.Root()
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save partitions for deadline %d", dlIdx)

			err = deadlines.UpdateDeadline(store, dlIdx, deadline)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadline %d", dlIdx)
		}

		successCount, err := succeededSectors.Count()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count succeededSectors")
		builtin.RequirePredicate(rt, successCount == uint64(len(validatedUpdates)), exitcode.ErrIllegalState, "unexpected success count %d != %d", successCount, len(validatedUpdates))

		// Overwrite sector infos.
		err = sectors.Store(newSectors...)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to update sector infos")

		st.Sectors, err = sectors.Root()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save sectors")

		err = st.SaveDeadlines(store, deadlines)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save deadlines")

	})

	notifyPledgeChanged(rt, pledgeDelta)
	requestUpdatePower(rt, powerDelta)

	return &succeededSectors
}

//////////
// Cron //
//////////

//type CronEventPayload struct {
//	EventType CronEventType
//}
type CronEventPayload = miner0.CronEventPayload

type CronEventType = miner0.CronEventType

const (
	CronEventProvingDeadline          = miner0.CronEventProvingDeadline
	CronEventProcessEarlyTerminations = miner0.CronEventProcessEarlyTerminations
)

func (a Actor) OnDeferredCronEvent(rt Runtime, params *builtin.DeferredCronEventParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)

	var payload miner0.CronEventPayload
	err := payload.UnmarshalCBOR(bytes.NewBuffer(params.EventPayload))
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unmarshal miner cron payload into expected structure")

	switch payload.EventType {
	case CronEventProvingDeadline:
		handleProvingDeadline(rt, params.RewardSmoothed, params.QualityAdjPowerSmoothed)
	case CronEventProcessEarlyTerminations:
		if processEarlyTerminations(rt, params.RewardSmoothed, params.QualityAdjPowerSmoothed) {
			scheduleEarlyTerminationWork(rt)
		}
	default:
		rt.Log(rtt.ERROR, "onDeferredCronEvent invalid event type: %v", payload.EventType)
	}

	var st State
	rt.StateReadonly(&st)
	err = st.CheckBalanceInvariants(rt.CurrentBalance())
	builtin.RequireNoErr(rt, err, ErrBalanceInvariantBroken, "balance invariants broken")
	return nil
}

////////////////////////////////////////////////////////////////////////////////
// Utility functions & helpers
////////////////////////////////////////////////////////////////////////////////

// TODO: We're using the current power+epoch reward. Technically, we
// should use the power/reward at the time of termination.
// https://github.com/filecoin-project/specs-actors/v7/pull/648
func processEarlyTerminations(rt Runtime, rewardSmoothed smoothing.FilterEstimate, qualityAdjPowerSmoothed smoothing.FilterEstimate) (more bool) {
	store := adt.AsStore(rt)

	var (
		result           TerminationResult
		dealsToTerminate []market.OnMinerSectorsTerminateParams
		penalty          = big.Zero()
		pledgeDelta      = big.Zero()
	)

	var st State
	rt.StateTransaction(&st, func() {
		var err error
		result, more, err = st.PopEarlyTerminations(store, AddressedPartitionsMax, AddressedSectorsMax)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pop early terminations")

		// Nothing to do, don't waste any time.
		// This can happen if we end up processing early terminations
		// before the cron callback fires.
		if result.IsEmpty() {
			rt.Log(rtt.INFO, "no early terminations (maybe cron callback hasn't happened yet?)")
			return
		}

		info := getMinerInfo(rt, &st)

		sectors, err := LoadSectors(store, st.Sectors)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sectors array")

		totalInitialPledge := big.Zero()
		dealsToTerminate = make([]market.OnMinerSectorsTerminateParams, 0, len(result.Sectors))
		err = result.ForEach(func(epoch abi.ChainEpoch, sectorNos bitfield.BitField) error {
			sectors, err := sectors.Load(sectorNos)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load sector infos")
			params := market.OnMinerSectorsTerminateParams{
				Epoch:   epoch,
				DealIDs: make([]abi.DealID, 0, len(sectors)), // estimate ~one deal per sector.
			}
			for _, sector := range sectors {
				params.DealIDs = append(params.DealIDs, sector.DealIDs...)
				totalInitialPledge = big.Add(totalInitialPledge, sector.InitialPledge)
			}
			penalty = big.Add(penalty, terminationPenalty(info.SectorSize, epoch,
				rewardSmoothed, qualityAdjPowerSmoothed, sectors))
			dealsToTerminate = append(dealsToTerminate, params)

			return nil
		})
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to process terminations")

		// Pay penalty
		err = st.ApplyPenalty(penalty)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")

		// Remove pledge requirement.
		err = st.AddInitialPledge(totalInitialPledge.Neg())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add initial pledge %v", totalInitialPledge.Neg())
		pledgeDelta = big.Sub(pledgeDelta, totalInitialPledge)

		// Use unlocked pledge to pay down outstanding fee debt
		penaltyFromVesting, penaltyFromBalance, err := st.RepayPartialDebtInPriorityOrder(store, rt.CurrEpoch(), rt.CurrentBalance())
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to pay penalty")
		penalty = big.Add(penaltyFromVesting, penaltyFromBalance)
		pledgeDelta = big.Sub(pledgeDelta, penaltyFromVesting)
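		// Note: penalty now holds the amount actually repaid (from vesting funds plus
		// balance), which may be less than the target if the miner lacks funds; any
		// shortfall remains as fee debt.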
	})

	// We didn't do anything, abort.
	if result.IsEmpty() {
		rt.Log(rtt.INFO, "no early terminations")
		return more
	}

	// Burn penalty.
	rt.Log(rtt.DEBUG, "storage provider %s penalized %s for sector termination", rt.Receiver(), penalty)
	burnFunds(rt, penalty, BurnMethodProcessEarlyTerminations)

	// Return pledge.
	notifyPledgeChanged(rt, pledgeDelta)

	// Terminate deals.
	for _, params := range dealsToTerminate {
		requestTerminateDeals(rt, params.Epoch, params.DealIDs)
	}

	// reschedule cron worker, if necessary.
	return more
}

// Invoked at the end of the last epoch for each proving deadline.
func handleProvingDeadline(rt Runtime,
	rewardSmoothed smoothing.FilterEstimate,
	qualityAdjPowerSmoothed smoothing.FilterEstimate) {
	currEpoch := rt.CurrEpoch()
	store := adt.AsStore(rt)

	hadEarlyTerminations := false

	powerDeltaTotal := NewPowerPairZero()
	penaltyTotal := abi.NewTokenAmount(0)
	pledgeDeltaTotal := abi.NewTokenAmount(0)

	var continueCron bool
	var st State
	rt.StateTransaction(&st, func() {
		{
			// Vest locked funds.
			// This happens first so that any subsequent penalties are taken
			// from locked vesting funds before funds freed this epoch.
			newlyVested, err := st.UnlockVestedFunds(store, rt.CurrEpoch())
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to vest funds")
			pledgeDeltaTotal = big.Add(pledgeDeltaTotal, newlyVested.Neg())
		}

		{
			// Process pending worker change if any
			info := getMinerInfo(rt, &st)
			processPendingWorker(info, rt, &st)
		}

		{
			depositToBurn, err := st.CleanUpExpiredPreCommits(store, currEpoch)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to expire pre-committed sectors")

			err = st.ApplyPenalty(depositToBurn)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")
			rt.Log(rtt.DEBUG, "storage provider %s penalized %s for expired pre commits", rt.Receiver(), depositToBurn)
		}

		// Record whether or not we _had_ early terminations in the queue before this method.
		// That way, we don't re-schedule a cron callback if one is already scheduled.
		hadEarlyTerminations = havePendingEarlyTerminations(rt, &st)

		{
			result, err := st.AdvanceDeadline(store, currEpoch)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to advance deadline")

			// Faults detected by this missed PoSt pay no penalty, but sectors that were already faulty
			// and remain faulty through this deadline pay the fault fee.
			penaltyTarget := PledgePenaltyForContinuedFault(
				rewardSmoothed,
				qualityAdjPowerSmoothed,
				result.PreviouslyFaultyPower.QA,
			)

			powerDeltaTotal = powerDeltaTotal.Add(result.PowerDelta)
			pledgeDeltaTotal = big.Add(pledgeDeltaTotal, result.PledgeDelta)

			err = st.ApplyPenalty(penaltyTarget)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to apply penalty")
			rt.Log(rtt.DEBUG, "storage provider %s penalized %s for continued fault", rt.Receiver(), penaltyTarget)

			penaltyFromVesting, penaltyFromBalance, err := st.RepayPartialDebtInPriorityOrder(store, currEpoch, rt.CurrentBalance())
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock penalty")
			penaltyTotal = big.Add(penaltyFromVesting, penaltyFromBalance)
			pledgeDeltaTotal = big.Sub(pledgeDeltaTotal, penaltyFromVesting)
		}

		continueCron = st.ContinueDeadlineCron()
		if !continueCron {
			st.DeadlineCronActive = false
		}
	})
	// Remove power for new faults, and burn penalties.
	requestUpdatePower(rt, powerDeltaTotal)
	burnFunds(rt, penaltyTotal, BurnMethodHandleProvingDeadline)
	notifyPledgeChanged(rt, pledgeDeltaTotal)

	// Schedule cron callback for next deadline's last epoch.
	if continueCron {
		newDlInfo := st.DeadlineInfo(currEpoch + 1)
		enrollCronEvent(rt, newDlInfo.Last(), &CronEventPayload{
			EventType: CronEventProvingDeadline,
		})
	} else {
		rt.Log(rtt.INFO, "miner %s going inactive, deadline cron discontinued", rt.Receiver())
	}

	// Record whether or not we _have_ early terminations now.
	hasEarlyTerminations := havePendingEarlyTerminations(rt, &st)

	// If we didn't have pending early terminations before, but we do now,
	// handle them at the next epoch.
	if !hadEarlyTerminations && hasEarlyTerminations {
		// First, try to process some of these terminations.
		if processEarlyTerminations(rt, rewardSmoothed, qualityAdjPowerSmoothed) {
			// If that doesn't work, just defer till the next epoch.
			scheduleEarlyTerminationWork(rt)
		}
		// Note: _don't_ process early terminations if we had a cron
		// callback already scheduled. In that case, we'll already have
		// processed AddressedSectorsMax terminations this epoch.
	}
}

// Check that the sector expiration is after activation and within the minimum and maximum allowed lifetimes.
func validateExpiration(rt Runtime, activation, expiration abi.ChainEpoch, sealProof abi.RegisteredSealProof) {
	// Expiration must be after activation. Check this explicitly to avoid an underflow below.
	if expiration <= activation {
		rt.Abortf(exitcode.ErrIllegalArgument, "sector expiration %v must be after activation (%v)", expiration, activation)
	}
	// expiration cannot be less than minimum after activation
	if expiration-activation < MinSectorExpiration {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, total sector lifetime (%d) must exceed %d after activation %d",
			expiration, expiration-activation, MinSectorExpiration, activation)
	}

	// expiration cannot exceed MaxSectorExpirationExtension from now
	if expiration > rt.CurrEpoch()+MaxSectorExpirationExtension {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, cannot be more than %d past current epoch %d",
			expiration, MaxSectorExpirationExtension, rt.CurrEpoch())
	}

	// total sector lifetime cannot exceed SectorMaximumLifetime for the sector's seal proof
	maxLifetime, err := builtin.SealProofSectorMaximumLifetime(sealProof)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "unrecognized seal proof type %d", sealProof)
	if expiration-activation > maxLifetime {
		rt.Abortf(exitcode.ErrIllegalArgument, "invalid expiration %d, total sector lifetime (%d) cannot exceed %d after activation %d",
			expiration, expiration-activation, maxLifetime, activation)
	}
}

func enrollCronEvent(rt Runtime, eventEpoch abi.ChainEpoch, callbackPayload *CronEventPayload) {
	payload := new(bytes.Buffer)
	err := callbackPayload.MarshalCBOR(payload)
	if err != nil {
		rt.Abortf(exitcode.ErrIllegalArgument, "failed to serialize payload: %v", err)
	}
	code := rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.MethodsPower.EnrollCronEvent,
		&power.EnrollCronEventParams{
			EventEpoch: eventEpoch,
			Payload:    payload.Bytes(),
		},
		abi.NewTokenAmount(0),
		&builtin.Discard{},
	)
	builtin.RequireSuccess(rt, code, "failed to enroll cron event")
}

func requestUpdatePower(rt Runtime, delta PowerPair) {
	if delta.IsZero() {
		return
	}
	code := rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.MethodsPower.UpdateClaimedPower,
		&power.UpdateClaimedPowerParams{
			RawByteDelta:         delta.Raw,
			QualityAdjustedDelta: delta.QA,
		},
		abi.NewTokenAmount(0),
		&builtin.Discard{},
	)
	builtin.RequireSuccess(rt, code, "failed to update power with %v", delta)
}

func requestTerminateDeals(rt Runtime, epoch abi.ChainEpoch, dealIDs []abi.DealID) {
	for len(dealIDs) > 0 {
		size := min64(cbg.MaxLength, uint64(len(dealIDs)))
		code := rt.Send(
			builtin.StorageMarketActorAddr,
			builtin.MethodsMarket.OnMinerSectorsTerminate,
			&market.OnMinerSectorsTerminateParams{
				Epoch:   epoch,
				DealIDs: dealIDs[:size],
			},
			abi.NewTokenAmount(0),
			&builtin.Discard{},
		)
		builtin.RequireSuccess(rt, code, "failed to terminate deals, exit code %v", code)
		dealIDs = dealIDs[size:]
	}
}

func scheduleEarlyTerminationWork(rt Runtime) {
	rt.Log(rtt.INFO, "scheduling early terminations with cron...")

	enrollCronEvent(rt, rt.CurrEpoch()+1, &CronEventPayload{
		EventType: CronEventProcessEarlyTerminations,
	})
}

func havePendingEarlyTerminations(rt Runtime, st *State) bool {
	// Record this up-front
	noEarlyTerminations, err := st.EarlyTerminations.IsEmpty()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count early terminations")
	return !noEarlyTerminations
}

func verifyWindowedPost(rt Runtime, challengeEpoch abi.ChainEpoch, sectors []*SectorOnChainInfo, proofs []proof.PoStProof) error {
	minerActorID, err := addr.IDFromAddress(rt.Receiver())
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "runtime provided bad receiver address %v", rt.Receiver())

	// Regenerate challenge randomness, which must match that generated for the proof.
	var addrBuf bytes.Buffer
	receiver := rt.Receiver()
	err = receiver.MarshalCBOR(&addrBuf)
	builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to marshal address for window post challenge")
	postRandomness := rt.GetRandomnessFromBeacon(crypto.DomainSeparationTag_WindowedPoStChallengeSeed, challengeEpoch, addrBuf.Bytes())

	sectorProofInfo := make([]proof.SectorInfo, len(sectors))
	for i, s := range sectors {
		sectorProofInfo[i] = proof.SectorInfo{
			SealProof:    s.SealProof,
			SectorNumber: s.SectorNumber,
			SealedCID:    s.SealedCID,
		}
	}

	// Get public inputs
	pvInfo := proof.WindowPoStVerifyInfo{
		Randomness:        abi.PoStRandomness(postRandomness),
		Proofs:            proofs,
		ChallengedSectors: sectorProofInfo,
		Prover:            abi.ActorID(minerActorID),
	}

	// Verify the PoSt ReplicaProof
	err = rt.VerifyPoSt(pvInfo)
	if err != nil {
		return fmt.Errorf("invalid PoSt %+v: %w", pvInfo, err)
	}
	return nil
}

// SealVerifyStuff is the structure of information that must be sent with a
// message to commit a sector. Most of this information is not needed in the
// state tree but will be verified in sm.CommitSector. See SealCommitment for
// data stored on the state tree for each sector.
type SealVerifyStuff struct {
	SealedCID        cid.Cid        // CommR
	InteractiveEpoch abi.ChainEpoch // Used to derive the interactive PoRep challenge.
	abi.RegisteredSealProof
	Proof   []byte
	DealIDs []abi.DealID
	abi.SectorNumber
	SealRandEpoch abi.ChainEpoch // Used to tie the seal to a chain.
}

func getVerifyInfo(rt Runtime, params *SealVerifyStuff) *proof.SealVerifyInfo {
	if rt.CurrEpoch() <= params.InteractiveEpoch {
		rt.Abortf(exitcode.ErrForbidden, "too early to prove sector")
	}

	commDs := requestUnsealedSectorCIDs(rt, &market.SectorDataSpec{
		SectorType: params.RegisteredSealProof,
		DealIDs:    params.DealIDs,
	})

	minerActorID, err := addr.IDFromAddress(rt.Receiver())
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "runtime provided non-ID receiver address %v", rt.Receiver())

	buf := new(bytes.Buffer)
	receiver := rt.Receiver()
	err = receiver.MarshalCBOR(buf)
	builtin.RequireNoErr(rt, err, exitcode.ErrSerialization, "failed to marshal address for seal verification challenge")

	svInfoRandomness := rt.GetRandomnessFromTickets(crypto.DomainSeparationTag_SealRandomness, params.SealRandEpoch, buf.Bytes())
	svInfoInteractiveRandomness := rt.GetRandomnessFromBeacon(crypto.DomainSeparationTag_InteractiveSealChallengeSeed, params.InteractiveEpoch, buf.Bytes())

	return &proof.SealVerifyInfo{
		SealProof: params.RegisteredSealProof,
		SectorID: abi.SectorID{
			Miner:  abi.ActorID(minerActorID),
			Number: params.SectorNumber,
		},
		DealIDs:               params.DealIDs,
		InteractiveRandomness: abi.InteractiveSealRandomness(svInfoInteractiveRandomness),
		Proof:                 params.Proof,
		Randomness:            abi.SealRandomness(svInfoRandomness),
		SealedCID:             params.SealedCID,
		UnsealedCID:           commDs[0],
	}
}

// Requests the storage market actor compute the unsealed sector CID from a sector's deals.
func requestUnsealedSectorCIDs(rt Runtime, dataCommitmentInputs ...*market.SectorDataSpec) []cid.Cid {
	if len(dataCommitmentInputs) == 0 {
		return nil
	}
	var ret market.ComputeDataCommitmentReturn
	code := rt.Send(
		builtin.StorageMarketActorAddr,
		builtin.MethodsMarket.ComputeDataCommitment,
		&market.ComputeDataCommitmentParams{
			Inputs: dataCommitmentInputs,
		},
		abi.NewTokenAmount(0),
		&ret,
	)
	builtin.RequireSuccess(rt, code, "failed request for unsealed sector CIDs")
	builtin.RequireState(rt, len(dataCommitmentInputs) == len(ret.CommDs), "number of data commitments computed %d does not match number of data commitment inputs %d", len(ret.CommDs), len(dataCommitmentInputs))
	unsealedCIDs := make([]cid.Cid, len(ret.CommDs))
	for i, cbgCid := range ret.CommDs {
		unsealedCIDs[i] = cid.Cid(cbgCid)
	}
	return unsealedCIDs
}

func requestDealWeights(rt Runtime, sectors []market.SectorDeals) *market.VerifyDealsForActivationReturn {
	// Short-circuit if there are no deals in any of the sectors.
	dealCount := 0
	for _, sector := range sectors {
		dealCount += len(sector.DealIDs)
	}
	if dealCount == 0 {
		emptyResult := &market.VerifyDealsForActivationReturn{
			Sectors: make([]market.SectorWeights, len(sectors)),
		}
		for i := 0; i < len(sectors); i++ {
			emptyResult.Sectors[i] = market.SectorWeights{
				DealSpace:          0,
				DealWeight:         big.Zero(),
				VerifiedDealWeight: big.Zero(),
			}
		}
		return emptyResult
	}

	var dealWeights market.VerifyDealsForActivationReturn
	code := rt.Send(
		builtin.StorageMarketActorAddr,
		builtin.MethodsMarket.VerifyDealsForActivation,
		&market.VerifyDealsForActivationParams{
			Sectors: sectors,
		},
		abi.NewTokenAmount(0),
		&dealWeights,
	)
	builtin.RequireSuccess(rt, code, "failed to verify deals and get deal weight")
	return &dealWeights
}

// Requests the current epoch target block reward from the reward actor.
// return value includes reward, smoothed estimate of reward, and baseline power
func requestCurrentEpochBlockReward(rt Runtime) reward.ThisEpochRewardReturn {
	var ret reward.ThisEpochRewardReturn
	code := rt.Send(builtin.RewardActorAddr, builtin.MethodsReward.ThisEpochReward, nil, big.Zero(), &ret)
	builtin.RequireSuccess(rt, code, "failed to check epoch baseline power")
	return ret
}

// Requests the current network total power and pledge from the power actor.
func requestCurrentTotalPower(rt Runtime) *power.CurrentTotalPowerReturn {
	var pwr power.CurrentTotalPowerReturn
	code := rt.Send(builtin.StoragePowerActorAddr, builtin.MethodsPower.CurrentTotalPower, nil, big.Zero(), &pwr)
	builtin.RequireSuccess(rt, code, "failed to check current power")
	return &pwr
}

// Resolves an address to an ID address and verifies that it is address of an account or multisig actor.
func resolveControlAddress(rt Runtime, raw addr.Address) addr.Address {
	resolved, ok := rt.ResolveAddress(raw)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "unable to resolve address %v", raw)
	}
	ownerCode, ok := rt.GetActorCodeCID(resolved)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "no code for address %v", resolved)
	}
	if !builtin.IsPrincipal(ownerCode) {
		rt.Abortf(exitcode.ErrIllegalArgument, "owner actor type must be a principal, was %v", ownerCode)
	}
	return resolved
}

// Resolves an address to an ID address and verifies that it is address of an account actor with an associated BLS key.
// The worker must be BLS since the worker key will be used alongside a BLS-VRF.
func resolveWorkerAddress(rt Runtime, raw addr.Address) addr.Address {
	resolved, ok := rt.ResolveAddress(raw)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "unable to resolve address %v", raw)
	}
	workerCode, ok := rt.GetActorCodeCID(resolved)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "no code for address %v", resolved)
	}
	if workerCode != builtin.AccountActorCodeID {
		rt.Abortf(exitcode.ErrIllegalArgument, "worker actor type must be an account, was %v", workerCode)
	}

	if raw.Protocol() != addr.BLS {
		var pubkey addr.Address
		code := rt.Send(resolved, builtin.MethodsAccount.PubkeyAddress, nil, big.Zero(), &pubkey)
		builtin.RequireSuccess(rt, code, "failed to fetch account pubkey from %v", resolved)
		if pubkey.Protocol() != addr.BLS {
			rt.Abortf(exitcode.ErrIllegalArgument, "worker account %v must have BLS pubkey, was %v", resolved, pubkey.Protocol())
		}
	}
	return resolved
}

func burnFunds(rt Runtime, amt abi.TokenAmount, bt BurnMethod) {
	if amt.GreaterThan(big.Zero()) {
		rt.Log(rtt.DEBUG, "storage provder %s burn type %s burning %s", rt.Receiver(), bt, amt)
		code := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, amt, &builtin.Discard{})
		builtin.RequireSuccess(rt, code, "failed to burn funds")
	}
}

func notifyPledgeChanged(rt Runtime, pledgeDelta abi.TokenAmount) {
	if !pledgeDelta.IsZero() {
		code := rt.Send(builtin.StoragePowerActorAddr, builtin.MethodsPower.UpdatePledgeTotal, &pledgeDelta, big.Zero(), &builtin.Discard{})
		builtin.RequireSuccess(rt, code, "failed to update total pledge")
	}
}

// Assigns proving period offset randomly in the range [0, WPoStProvingPeriod) by hashing
// the actor's address and current epoch.
func assignProvingPeriodOffset(myAddr addr.Address, currEpoch abi.ChainEpoch, hash func(data []byte) [32]byte) (abi.ChainEpoch, error) {
	offsetSeed := bytes.Buffer{}
	err := myAddr.MarshalCBOR(&offsetSeed)
	if err != nil {
		return 0, fmt.Errorf("failed to serialize address: %w", err)
	}

	err = binary.Write(&offsetSeed, binary.BigEndian, currEpoch)
	if err != nil {
		return 0, fmt.Errorf("failed to serialize epoch: %w", err)
	}

	digest := hash(offsetSeed.Bytes())
	var offset uint64
	err = binary.Read(bytes.NewBuffer(digest[:]), binary.BigEndian, &offset)
	if err != nil {
		return 0, fmt.Errorf("failed to interpret digest: %w", err)
	}

	offset = offset % uint64(WPoStProvingPeriod)
	return abi.ChainEpoch(offset), nil
}

// Computes the epoch at which a proving period should start such that it is greater than the current epoch, and
// has a defined offset from being an exact multiple of WPoStProvingPeriod.
// A miner is exempt from Window PoSt until the first full proving period starts.
func currentProvingPeriodStart(currEpoch abi.ChainEpoch, offset abi.ChainEpoch) abi.ChainEpoch {
	currModulus := currEpoch % WPoStProvingPeriod
	var periodProgress abi.ChainEpoch // How far ahead is currEpoch from previous offset boundary.
	if currModulus >= offset {
		periodProgress = currModulus - offset
	} else {
		periodProgress = WPoStProvingPeriod - (offset - currModulus)
	}

	periodStart := currEpoch - periodProgress
	return periodStart
}

// Computes the deadline index for the current epoch for a given period start.
// currEpoch must be within the proving period that starts at provingPeriodStart to produce a valid index.
func currentDeadlineIndex(currEpoch abi.ChainEpoch, periodStart abi.ChainEpoch) uint64 {
	return uint64((currEpoch - periodStart) / WPoStChallengeWindow)
}

// Update worker address with pending worker key if exists and delay has passed
func processPendingWorker(info *MinerInfo, rt Runtime, st *State) {
	if info.PendingWorkerKey == nil || rt.CurrEpoch() < info.PendingWorkerKey.EffectiveAt {
		return
	}

	info.Worker = info.PendingWorkerKey.NewWorker
	info.PendingWorkerKey = nil

	err := st.SaveInfo(adt.AsStore(rt), info)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not save miner info")
}

// Computes deadline information for a fault or recovery declaration.
// If the deadline has not yet elapsed, the declaration is taken as being for the current proving period.
// If the deadline has elapsed, it's instead taken as being for the next proving period after the current epoch.
func declarationDeadlineInfo(periodStart abi.ChainEpoch, deadlineIdx uint64, currEpoch abi.ChainEpoch) (*dline.Info, error) {
	if deadlineIdx >= WPoStPeriodDeadlines {
		return nil, fmt.Errorf("invalid deadline %d, must be < %d", deadlineIdx, WPoStPeriodDeadlines)
	}

	deadline := NewDeadlineInfo(periodStart, deadlineIdx, currEpoch).NextNotElapsed()
	return deadline, nil
}

// Checks that a fault or recovery declaration at a specific deadline is outside the exclusion window for the deadline.
func validateFRDeclarationDeadline(deadline *dline.Info) error {
	if deadline.FaultCutoffPassed() {
		return fmt.Errorf("late fault or recovery declaration at %v", deadline)
	}
	return nil
}

// Validates that a partition contains the given sectors.
func validatePartitionContainsSectors(partition *Partition, sectors bitfield.BitField) error {
	// Check that the declared sectors are actually assigned to the partition.
	contains, err := BitFieldContainsAll(partition.Sectors, sectors)
	if err != nil {
		return xerrors.Errorf("failed to check sectors: %w", err)
	}
	if !contains {
		return xerrors.Errorf("not all sectors are assigned to the partition")
	}
	return nil
}

func terminationPenalty(sectorSize abi.SectorSize, currEpoch abi.ChainEpoch,
	rewardEstimate, networkQAPowerEstimate smoothing.FilterEstimate, sectors []*SectorOnChainInfo) abi.TokenAmount {
	totalFee := big.Zero()
	for _, s := range sectors {
		sectorPower := QAPowerForSector(sectorSize, s)
		fee := PledgePenaltyForTermination(s.ExpectedDayReward, currEpoch-s.Activation, s.ExpectedStoragePledge,
			networkQAPowerEstimate, sectorPower, rewardEstimate, s.ReplacedDayReward, s.ReplacedSectorAge)
		totalFee = big.Add(fee, totalFee)
	}
	return totalFee
}

func PowerForSector(sectorSize abi.SectorSize, sector *SectorOnChainInfo) PowerPair {
	return PowerPair{
		Raw: big.NewIntUnsigned(uint64(sectorSize)),
		QA:  QAPowerForSector(sectorSize, sector),
	}
}

// Returns the sum of the raw byte and quality-adjusted power for sectors.
func PowerForSectors(ssize abi.SectorSize, sectors []*SectorOnChainInfo) PowerPair {
	qa := big.Zero()
	for _, s := range sectors {
		qa = big.Add(qa, QAPowerForSector(ssize, s))
	}

	return PowerPair{
		Raw: big.Mul(big.NewIntUnsigned(uint64(ssize)), big.NewIntUnsigned(uint64(len(sectors)))),
		QA:  qa,
	}
}

func ConsensusFaultActive(info *MinerInfo, currEpoch abi.ChainEpoch) bool {
	// For penalization period to last for exactly finality epochs
	// consensus faults are active until currEpoch exceeds ConsensusFaultElapsed
	return currEpoch <= info.ConsensusFaultElapsed
}

func getMinerInfo(rt Runtime, st *State) *MinerInfo {
	info, err := st.GetInfo(adt.AsStore(rt))
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "could not read miner info")
	return info
}

func min64(a, b uint64) uint64 {
	if a < b {
		return a
	}
	return b
}

func max64(a, b uint64) uint64 {
	if a > b {
		return a
	}
	return b
}

func minEpoch(a, b abi.ChainEpoch) abi.ChainEpoch {
	if a < b {
		return a
	}
	return b
}

func maxEpoch(a, b abi.ChainEpoch) abi.ChainEpoch { //nolint:deadcode,unused
	if a > b {
		return a
	}
	return b
}

func checkControlAddresses(rt Runtime, controlAddrs []addr.Address) {
	if len(controlAddrs) > MaxControlAddresses {
		rt.Abortf(exitcode.ErrIllegalArgument, "control addresses length %d exceeds max control addresses length %d", len(controlAddrs), MaxControlAddresses)
	}
}

func checkPeerInfo(rt Runtime, peerID abi.PeerID, multiaddrs []abi.Multiaddrs) {
	if len(peerID) > MaxPeerIDLength {
		rt.Abortf(exitcode.ErrIllegalArgument, "peer ID size of %d exceeds maximum size of %d", peerID, MaxPeerIDLength)
	}

	totalSize := 0
	for _, ma := range multiaddrs {
		if len(ma) == 0 {
			rt.Abortf(exitcode.ErrIllegalArgument, "invalid empty multiaddr")
		}
		totalSize += len(ma)
	}
	if totalSize > MaxMultiaddrData {
		rt.Abortf(exitcode.ErrIllegalArgument, "multiaddr size of %d exceeds maximum of %d", totalSize, MaxMultiaddrData)
	}
}

Miner Collaterals

Most permissionless blockchain networks require an upfront investment in resources in order to participate in consensus. The more power an entity has on the network, the greater the share of total resources it needs to own, whether in physical resources, staked tokens (collateral), or both.

Filecoin must achieve security via the dedication of resources. By design, Filecoin mining requires only commodity hardware (as opposed to specialized ASICs) that is cheap in amortized cost and easy to repurpose, which means the protocol cannot rely on hardware alone as the capital investment at stake for attackers. Filecoin therefore also uses upfront token collateral, as in proof-of-stake protocols, proportional to the storage hardware committed. This gets the best of both worlds: attacking the network requires both acquiring and running the hardware, but it also requires acquiring large quantities of the token.

To satisfy the multiple needs for collateral in a way that is minimally burdensome to miners, Filecoin includes three different collateral mechanisms: initial pledge collateral, block reward as collateral, and storage deal provider collateral. The first is an initial commitment of filecoin that a miner must provide with each sector. The second is a mechanism to reduce the initial token commitment by vesting block rewards over time. The third aligns incentives between miner and client, and can allow miners to differentiate themselves in the market. The remainder of this subsection describes each in more detail.

Initial Pledge Collateral

Filecoin Miners must commit resources in order to participate in the economy; the protocol can use the minersʼ stake in the network to ensure that rational behavior benefits the network, rewarding the creation of value and penalizing malicious behavior via slashing. The pledge size is meant to adequately incentivize the fulfillment of a sectorʼs promised lifetime and provide sufficient consensus security.

Hence, the initial pledge function consists of two components: a storage pledge and a consensus pledge.

$SectorInitialPledge = SectorInitialStoragePledge + SectorInitialConsensusPledge$

The storage pledge protects the networkʼs quality-of-service for clients by providing starting collateral for the sector in the event of slashing. The storage pledge must be small enough to be feasible for miners joining the network, and large enough to collateralize storage against early faults, penalties, and fees. The vesting of block rewards and the use of unvested rewards as additional collateral reduces the initial storage pledge without compromising the incentive alignment of the network. This is discussed in more depth in the following subsection. A balance is achieved by using an initial storage pledge amount approximately sufficient to cover 7 daysʼ worth of Sector fault fee and 1 Sector fault detection fee. This is denominated in the number of days of future rewards that a sector is expected to earn.

$SectorInitialStoragePledge = Estimated20DaysSectorBlockReward$

Since the storage pledge per sector is based on the expected block reward that sector will win, the storage pledge is independent of the networkʼs total storage. As a result, the total network storage pledge depends solely on future block reward. Thus, while the storage pledge provides a clean way to reason about the rationality of adding a sector, it does not provide sufficient long-term security guarantees to the network, making consensus takeovers less costly as the block reward decreases. As such, the second half of the initial pledge function, the consensus pledge, depends on both the amount of quality-adjusted power (QAP) added by the sector and the network circulating supply. The network targets approximately 30% of the network’s circulating supply locked up in initial consensus pledge when it is at or above the baseline. This is achieved with a small pledge share allocated to sectors based on their share of the networkʼs quality-adjusted power. Given an exponentially growing baseline, initial pledge per unit QAP should decrease over time, as should other mining costs.

$SectorInitialConsensusPledge = 30\% \times FILCirculatingSupply \times \frac{SectorQAP}{max(NetworkBaseline, NetworkQAP)}$
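
As a rough, non-normative sketch of how these two components combine, the calculation can be expressed with the big-integer helpers from go-state-types. The sectorʼs estimated daily block reward is taken as an input here (the actor derives it from smoothed reward and power estimates), and precision handling is simplified.

package example

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
)

// initialPledge illustrates SectorInitialPledge = SectorInitialStoragePledge + SectorInitialConsensusPledge.
// It is an illustrative simplification, not the normative on-chain formula.
func initialPledge(
	estimatedDayReward abi.TokenAmount, // expected block reward the sector earns per day (assumed input)
	circulatingSupply abi.TokenAmount, // current FIL circulating supply
	sectorQAP abi.StoragePower, // the sector's quality-adjusted power
	networkQAP abi.StoragePower, // total network quality-adjusted power
	baseline abi.StoragePower, // network baseline power
) abi.TokenAmount {
	// Storage pledge: roughly 20 days of the sector's expected block reward.
	storagePledge := big.Mul(estimatedDayReward, big.NewInt(20))

	// Consensus pledge: 30% of circulating supply, scaled by the sector's share
	// of max(NetworkBaseline, NetworkQAP).
	lockTarget := big.Div(big.Mul(circulatingSupply, big.NewInt(30)), big.NewInt(100))
	consensusPledge := big.Div(big.Mul(lockTarget, sectorQAP), big.Max(networkQAP, baseline))

	return big.Add(storagePledge, consensusPledge)
}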

Block Reward Collateral

Clients need reliable storage. Under certain circumstances, miners might agree to a storage deal, then want to abandon it later as a result of increased costs or other market dynamics. A system where storage miners can freely or cheaply abandon files would drive clients away from Filecoin as a result of serious data loss and low quality of service. To make sure all the incentives are correctly aligned, Filecoin penalizes miners that fail to store files for the promised duration. As such, high collateral could be used to incentivize good behavior and improve the networkʼs quality of service. On the other hand, however, high collateral creates barriers to miners joining the network. Filecoin’s constructions have been designed such that they hit the right balance.

In order to reduce the upfront collateral that a miner needs to provide, the block reward is used as collateral. This allows the protocol to require a smaller but still meaningful initial pledge. Block rewards earned by a sector are subject to slashing if a sector is terminated before its expiration. However, due to chain state limitations, the protocol is unable to do accounting on a per sector level, which would be the most fair and accurate. Instead, the chain performs a per-miner level approximation. Sublinear vesting provides a strong guarantee that miners will always have the incentive to keep data stored until the deal expires and not earlier. An extreme vesting schedule would release all tokens that a sector earns only when the sector promise is fulfilled.

However, the protocol should provide liquidity for miners to support their mining operations, and releasing rewards all at once creates supply impulses to the network. Moreover, there should not be a disincentive for longer sector lifetime if the vesting duration also depends on the lifetime of the sector. As a result, a fixed duration linear vesting for the rewards that a miner earns after a short delay creates the necessary sub-linearity. This sub-linearity has been introduced by the Initial Pledge.

In general, fault fees are slashed first from the soonest-to-vest unvested block rewards followed by the minerʼs account balance. When a minerʼs balance is insufficient to cover their minimum requirements, their ability to participate in consensus, win block rewards, and grow storage power will be restricted until their balance is restored. Overall, this reduces the initial pledge requirement and creates a sufficient economic deterrent for faults without slashing the miner’s balance for every penalty.
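
A minimal sketch of this repayment priority, mirroring the RepayPartialDebtInPriorityOrder call shown earlier in handleProvingDeadline, with the amounts as plain token values rather than miner state:

package example

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
)

// repayPenalty takes a penalty first from soonest-to-vest unvested block rewards,
// then from the miner's available balance; whatever cannot be covered remains as
// fee debt. Illustrative only; the actor performs this against its on-chain state.
func repayPenalty(penalty, unvestedRewards, availableBalance abi.TokenAmount) (fromVesting, fromBalance, remainingDebt abi.TokenAmount) {
	fromVesting = big.Min(penalty, unvestedRewards)
	remaining := big.Sub(penalty, fromVesting)

	fromBalance = big.Min(remaining, availableBalance)
	remainingDebt = big.Sub(remaining, fromBalance)
	return fromVesting, fromBalance, remainingDebt
}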

Storage Deal Collateral

The third form of collateral is provided by the storage provider to collateralize deals. See the Storage Market Actor for further details on the Storage Deal Collateral.

Storage Proving

Filecoin Proving Subsystem

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"
import sealer "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/sealer"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type StorageProvingSubsystem struct {
    SectorSealer   sealer.SectorSealer
    PoStGenerator  poster.PoStGenerator

    VerifySeal(sv abi.SealVerifyInfo, pieceInfos [abi.PieceInfo]) union {ok bool, err error}
    ComputeUnsealedSectorCID(sectorSize UInt, pieceInfos [abi.PieceInfo]) union {unsealedSectorCID abi.UnsealedSectorCID, err error}

    ValidateBlock(block block.Block)

    // TODO: remove this?
    // GetPieceInclusionProof(pieceRef CID) union { PieceInclusionProofs, error }

    GenerateElectionPoStCandidates(
        challengeSeed  abi.PoStRandomness
        sectorIDs      [abi.SectorID]
    ) [abi.PoStCandidate]

    GenerateSurprisePoStCandidates(
        challengeSeed  abi.PoStRandomness
        sectorIDs      [abi.SectorID]
    ) [abi.PoStCandidate]

    CreateElectionPoStProof(
        challengeSeed  abi.PoStRandomness
        candidates     [abi.PoStCandidate]
    ) [abi.PoStProof]

    CreateSurprisePoStProof(
        challengeSeed  abi.PoStRandomness
        candidates     [abi.PoStCandidate]
    ) [abi.PoStProof]
}

Sector Poster

PoSt Generator object
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import sector_index "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"

type UInt64 UInt

// TODO: move this to somewhere the blockchain can import
// candidates:
// - filproofs - may have to learn about Sectors (and if we move Seal stuff, Deals)
// - "blockchain/builtins" or something like that - a component in the blockchain that handles storage verification
type PoStSubmission struct {
    PostProof   abi.PoStProof
    ChainEpoch  abi.ChainEpoch
}

type PoStGenerator struct {
    SectorStore sector_index.SectorStore

    GeneratePoStCandidates(
        challengeSeed   abi.PoStRandomness
        candidateCount  UInt
        sectors         [abi.SectorID]
    ) [abi.PoStCandidate]

    CreateElectionPoStProof(
        randomness  abi.PoStRandomness
        witness     [abi.PoStCandidate]
    ) [abi.PoStProof]

    CreateSurprisePoStProof(
        randomness  abi.PoStRandomness
        witness     [abi.PoStCandidate]
    ) [abi.PoStProof]

    // FIXME: Verification shouldn't require a PoStGenerator. Move this.
    VerifyPoStProof(
        Proof          abi.PoStProof
        challengeSeed  abi.PoStRandomness
    ) bool
}
package poster

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
	util "github.com/filecoin-project/specs/util"
)

type Serialization = util.Serialization

// See "Proof-of-Spacetime Parameters" Section
// TODO: Unify with orient model.
const POST_CHALLENGE_DEADLINE = uint(480)

func (pg *PoStGenerator_I) GeneratePoStCandidates(challengeSeed abi.PoStRandomness, candidateCount int, sectors []abi.SectorID) []abi.PoStCandidate {
	// Question: Should we pass metadata into FilProofs so it can interact with SectorStore directly?
	// Like this:
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata);

	// Question: Or should we resolve + manifest trees here and pass them in?
	// Like this:
	// trees := sectorsMetadata.map(func(md) { SectorStorage.GetMerkleTree(md.MerkleTreePath) });
	// Done this way, we redundantly pass the tree paths in the metadata. At first thought, the other way
	// seems cleaner.
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata, trees);

	// For now, dodge this by passing the whole SectorStore. Once we decide how we want to represent this, we can narrow the call.

	return filproofs.GenerateElectionPoStCandidates(challengeSeed, sectors, candidateCount, pg.SectorStore())
}

func (pg *PoStGenerator_I) CreateElectionPoStProof(randomness abi.PoStRandomness, postCandidates []abi.PoStCandidate) []abi.PoStProof {
	var privateProofs []abi.PrivatePoStCandidateProof

	for _, candidate := range postCandidates {
		privateProofs = append(privateProofs, candidate.PrivateProof)
	}

	return filproofs.CreateElectionPoStProof(privateProofs, randomness)
}

func (pg *PoStGenerator_I) CreateSurprisePoStProof(randomness abi.PoStRandomness, postCandidates []abi.PoStCandidate) []abi.PoStProof {
	var privateProofs []abi.PrivatePoStCandidateProof

	for _, candidate := range postCandidates {
		privateProofs = append(privateProofs, candidate.PrivateProof)
	}

	return filproofs.CreateSurprisePoStProof(privateProofs, randomness)
}

Sector Sealer

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"
import addr "github.com/filecoin-project/go-address"

type SealInputs struct {
    SectorSize       abi.SectorSize
    RegisteredProof  abi.RegisteredProof  // FIXME: Ensure this is provided.
    SectorID         abi.SectorID
    MinerID          addr.Address
    RandomSeed       abi.SealRandomness  // This should be derived from SEAL_EPOCH = CURRENT_EPOCH - FINALITY.
    UnsealedPath     file.Path
    SealedPath       file.Path
    DealIDs          [abi.DealID]
}

type CreateSealProofInputs struct {
    SectorID               abi.SectorID
    RegisteredProof        abi.RegisteredProof
    InteractiveRandomSeed  abi.InteractiveSealRandomness
    SealedPaths            [file.Path]
    SealOutputs
}

type SealOutputs struct {
    ProofAuxTmp sector.ProofAuxTmp
}

type CreateSealProofOutputs struct {
    SealInfo  abi.SealVerifyInfo
    ProofAux  sector.PersistentProofAux
}

type SectorSealer struct {
    SealSector() union {so SealOutputs, err error}
    CreateSealProof(si CreateSealProofInputs) union {so CreateSealProofOutputs, err error}

    MaxUnsealedBytesPerSector(SectorSize UInt) UInt
}

Markets

Filecoin is a consensus protocol, a data-storage platform, and a marketplace for storing and retrieving data. There are two major components to Filecoin markets: the storage market and the retrieval market. While negotiations for both the storage and the retrieval markets take place primarily off the blockchain (at least in the current version of Filecoin), storage deals made in the storage market are published on-chain and enforced by the protocol. Storage deal negotiation and order matching are expected to happen off-chain in the first version of Filecoin. Retrieval deals are also negotiated off-chain and executed with micropayments between transacting parties in payment channels.

Even though most of the market actions happen off the blockchain, there are on-chain invariants that create economic structure for network success and allow for positive emergent behavior. You can read more about the relationship between on-chain deals and storage power in Storage Power Consensus.

Storage Market in Filecoin

The Storage Market subsystem is the data entry point into the network. Storage miners can earn power from data stored in a storage deal, and all deals live on the Filecoin network. The specific deal negotiation process happens off-chain; once an agreement has been reached, clients and miners enter into a storage deal and post it on the Filecoin network in order to earn block rewards and get paid for storing the data in the deal. A deal is only valid when it is posted on-chain with signatures from both parties and, at the time of posting, both parties have sufficient balances locked up to honor the deal in terms of deal price and deal collateral.

Terminology

  • StorageClient - The party that wants to make a deal to store data
  • StorageProvider - The party that will store the data in exchange for payment. A storage miner.
  • StorageMarketActor - The on-chain component of deals. The StorageMarketActor is analogous to an escrow and a ledger for all deals made.
  • StorageAsk - The current price and parameters a miner is currently offering for storage (analogous to an Ask in a financial market)
  • StorageDealProposal - A proposal for a storage deal, signed only by the StorageClient
  • StorageDeal - A storage deal proposal with a counter signature from the Provider, which then goes on-chain.

Deal Flow

The lifecycle for a deal within the storage market contains distinct phases:

  1. Discovery - The client identifies miners and determines their current asks.
  2. Negotiation (out of band) - Both parties come to an agreement about the terms of the deal, each party commits funds to the deal and data is transferred from the client to the provider.
  3. Publishing - The deal is published on chain, making the storage provider publicly accountable for the deal.
  4. Handoff - Once the deal is published, it is handed off and handled by the Storage Mining Subsystem. The Storage Mining Subsystem will add the data corresponding to the deal to a sector, seal the sector, and tell the Storage Market Actor that the deal is in a sector, thereby marking the deal as active.

From that point on, the deal is handled by the Storage Mining Subsystem, which communicates with the Storage Market Actor in order to process deal payments. See Storage Mining Subsystem for more details.

The following diagram outlines the phases of deal flow within the storage market in detail:

Storage Market Deal Flow

Discovery

Discovery is the client process of identifying storage providers (i.e. miners) who (subject to agreement on the deal’s terms) are offering to store the client’s data. There are many ways for a client to identify a provider to store their data. The list below outlines the minimum discovery services a Filecoin implementation MUST provide. As the network evolves, third parties may build systems that supplement or enhance these services.

Discovery involves identifying providers and determining their current StorageAsk. The steps are as follows:

  1. A client queries the chain to retrieve a list of Storage Miner Actors who have registered as miners with the StoragePowerActor.
  2. A client may perform additional queries to each Storage Miner Actor to determine their properties. Among others, these properties can include worker address, sector size, libp2p Multiaddress etc.
  3. Once the client identifies potentially suitable providers, it sends a direct libp2p message using the Storage Query Protocol to get each potential provider’s current StorageAsk.
  4. Miners respond on the AskProtocol with a signed version of their current StorageAsk.

A StorageAsk contains all the properties that a client will need to determine if a given provider will meet its needs for storage at this moment. Providers should update their asks frequently to ensure the information they are providing to clients is up to date.
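
As a purely illustrative example, a client might filter a received ask against its own limits using the StorageAsk fields defined later in this section; the price cap and piece size below are the client’s own parameters, not protocol values.

package example

import (
	"github.com/filecoin-project/go-fil-markets/storagemarket"
	"github.com/filecoin-project/go-state-types/abi"
)

// askMeetsRequirements is an illustrative client-side check of a provider's ask
// against the client's own limits. Real clients will apply richer policies.
func askMeetsRequirements(
	ask *storagemarket.StorageAsk,
	currEpoch abi.ChainEpoch,
	maxPricePerGiBEpoch abi.TokenAmount, // the most the client will pay per GiB per epoch
	pieceSize abi.PaddedPieceSize, // size of the piece the client wants stored
	verified bool, // whether this would be a verified deal
) bool {
	if ask == nil || ask.Expiry < currEpoch {
		return false // missing or stale ask
	}

	price := ask.Price
	if verified {
		price = ask.VerifiedPrice
	}
	if price.GreaterThan(maxPricePerGiBEpoch) {
		return false // too expensive
	}

	// The piece must fall within the provider's accepted size range.
	return pieceSize >= ask.MinPieceSize && pieceSize <= ask.MaxPieceSize
}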

Negotiation

Negotiation is the out-of-band process during which a storage client and a storage provider come to an agreement about a storage deal and reach the point where a deal is published on chain.

Negotiation begins once a client has discovered a miner whose StorageAsk meets their desired criteria. The recommended order of operations for negotiating and publishing a deal is as follows:

  1. In order to propose a storage deal, the StorageClient calculates the piece commitment (CommP) for the data it intends to store. This is necessary so that the StorageProvider can verify that the data the StorageClient sends to be stored matches the CommP in the StorageDealProposal. For more detail about the relationship between payloads, pieces, and CommP see Piece.
  2. Before sending a proposal to the provider, the StorageClient adds funds for a deal, as necessary, to the StorageMarketActor (by calling AddBalance).
  3. The StorageClient now creates a StorageDealProposal and sends the proposal and the CID for the root of the data payload to be stored to the StorageProvider using the Storage Deal Protocol (a sketch of this step appears at the end of this subsection).

From this point onwards, execution moves to the StorageProvider.

  1. The StorageProvider inspects the deal to verify that the deal’s parameters match its own internal criteria (such as price, piece size, deal duration, etc). The StorageProvider rejects the proposal if the parameters don’t match its own criteria by sending a rejection to the client over the Storage Deal Protocol.
  2. The StorageProvider queries the StorageMarketActor to verify the StorageClient has deposited enough funds to make the deal (i.e. the client’s balance is greater than the total storage price) and rejects the proposal if it hasn’t.
  3. If all criteria are met, the StorageProvider responds using the Storage Deal Protocol to indicate an intent to accept the deal.

From this point onwards execution moves back to the StorageClient.

  1. The StorageClient opens a push request for the payload data using the Data Transfer Module, and sends the request to the provider along with a voucher containing the CID for the StorageDealProposal.
  2. The StorageProvider checks the voucher and verifies that the CID matches the storage deal proposal it has received and verified but not put on chain already. If so, it accepts the data transfer request from the StorageClient.
  3. The Data Transfer Module now transfers the payload data to be stored from the StorageClient to the StorageProvider using GraphSync.
  4. Once complete, the Data Transfer Module notifies the StorageProvider.
  5. The StorageProvider recalculates the piece commitment (CommP) from the data transfer that just completed and verifies it matches the piece commitment in the StorageDealProposal.
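
As a rough sketch of the client-side proposal step (step 3 in the client list above), assuming the StorageClient exposes a ProposeStorageDeal method that takes the ProposeStorageDealParams defined later in this section (as in go-fil-markets); the seal proof and transfer type are example choices.

package example

import (
	"context"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-fil-markets/storagemarket"
	"github.com/filecoin-project/go-state-types/abi"
)

// proposeDeal wraps the client's deal proposal step. providerInfo would normally
// come from discovery, and payloadRoot is the root CID of the prepared UnixFS DAG.
func proposeDeal(
	ctx context.Context,
	client storagemarket.StorageClient, // assumed to expose ProposeStorageDeal
	clientAddr address.Address,
	providerInfo *storagemarket.StorageProviderInfo,
	payloadRoot cid.Cid,
	start, end abi.ChainEpoch,
	price, collateral abi.TokenAmount,
) (cid.Cid, error) {
	result, err := client.ProposeStorageDeal(ctx, storagemarket.ProposeStorageDealParams{
		Addr:       clientAddr,
		Info:       providerInfo,
		Data:       &storagemarket.DataRef{TransferType: storagemarket.TTGraphsync, Root: payloadRoot},
		StartEpoch: start,
		EndEpoch:   end,
		Price:      price,
		Collateral: collateral,
		Rt:         abi.RegisteredSealProof_StackedDrg32GiBV1_1, // example seal proof type
	})
	if err != nil {
		return cid.Undef, err
	}
	// The returned ProposalCid identifies the signed proposal for later status queries.
	return result.ProposalCid, nil
}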

Publishing

Data is now transferred, both parties have agreed, and it’s time to publish the deal. Given that the counter signature on a deal proposal is a standard message signature by the provider and the signed deal is an on-chain message, it is usually the StorageProvider that publishes the deal. However, if the StorageProvider decides to send this signed on-chain message to the client before calling PublishStorageDeals, then the client can publish the deal on-chain. The client’s funds are not locked until the deal is published, and a published deal that is not activated within some pre-defined window will result in an on-chain penalty.

  1. First, the StorageProvider adds collateral for the deal as needed to the StorageMarketActor (using AddBalance).
  2. Then, the StorageProvider prepares and signs the on-chain StorageDeal message with the StorageDealProposal signed by the client and its own signature. It can now either send this message back to the client or call PublishStorageDeals on the StorageMarketActor to publish the deal. It is recommended that the StorageProvider send the signed message back to the client before PublishStorageDeals is called.
  3. After calling PublishStorageDeals, the StorageProvider sends a message to the StorageClient on the Storage Deal Protocol with the CID of the message that it is putting on chain for convenience.
  4. If all goes well, the StorageMarketActor responds with an on-chain DealID for the published deal.

Finally, the StorageClient verifies the deal.

  1. The StorageClient queries the node for the CID of the message published on chain (sent by the provider). It then inspects the message parameters to make sure they match the previously agreed deal.

Handoff

Now that a deal is published, it needs to be stored, sealed, and proven in order for the provider to be paid. See Storage Deal for more information about how deal payments are made. These later stages of a deal are handled by the Storage Mining Subsystem, so the final task for the Storage Market is to hand off the deal to the Storage Mining Subsystem.

  1. The StorageProvider writes the serialized, padded piece to a shared Filestore.
  2. The StorageProvider calls HandleStorageDeal on the StorageMiner with the published StorageDeal and filestore path (in Go this is the io.Reader).

A note regarding the order of operations: the only requirement to publish a storage deal with the StorageMarketActor is that the StorageDealProposal is signed by the StorageClient, the publish message is signed by the StorageProvider, and both parties have deposited adequate funds/collateral in the StorageMarketActor. As such, it’s not required that the steps listed above happen in this exact order. However, the above order is recommended because it generally minimizes the ability of either party to act maliciously.

Data Representation in the Storage Market

Data submitted to the Filecoin network goes through several transformations before it reaches the format in which the StorageProvider stores it. Here we provide a summary of these transformations; a commented sketch of the pipeline follows the list.

  1. When a piece of data, or file is submitted to Filecoin (in some raw system format) it is transformed into a UnixFS DAG style data representation (in case it is not in this format already, e.g., from IPFS-based applications). The hash that represents the root of the IPLD DAG of the UnixFS file is the Payload CID, which is used in the Retrieval Market.
  2. In order to make a Filecoin Piece, the UnixFS IPLD DAG is serialised into a .car file, which is also raw bytes.
  3. The resulting .car file is padded with some extra data.
  4. The next step is to calculate the Merkle root of the hashes of the padded Piece. The resulting root of the Merkle tree is the Piece CID. This is also referred to as CommP. Note that at this stage the data is still unsealed.
  5. At this point, the Piece is included in a Sector together with data from other deals. The StorageProvider then calculates the Merkle root over all the Pieces inside the sector. The root of this tree is CommD (the unsealed sector CID).
  6. The StorageProvider then seals the sector; the root of the resulting Merkle tree is CommR.
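
The client-side portion of this pipeline can be restated as the following sketch; every helper in it is a hypothetical stub standing in for the corresponding library step (UnixFS DAG building, CAR serialization, padding, Merkle hashing), not a real API.

package example

// Hypothetical stubs: each stands in for the library step named in its comment.
type unixfsDAG struct{ root string }

func buildUnixFSDAG(raw []byte) unixfsDAG  { return unixfsDAG{} } // step 1: raw data -> UnixFS IPLD DAG
func serializeToCAR(d unixfsDAG) []byte    { return nil }         // step 2: DAG -> .car bytes
func padPiece(car []byte) []byte           { return nil }         // step 3: pad the .car file
func pieceMerkleRoot(padded []byte) string { return "" }          // step 4: Merkle root of the padded piece

// prepareDealData shows the client-side portion of the pipeline above.
func prepareDealData(rawFile []byte) (payloadCID, pieceCID string) {
	d := buildUnixFSDAG(rawFile)
	payloadCID = d.root // Payload CID, used in the Retrieval Market

	car := serializeToCAR(d)
	padded := padPiece(car)
	pieceCID = pieceMerkleRoot(padded) // Piece CID (CommP); the data is still unsealed

	// Steps 5-6 happen on the provider side: Pieces are aggregated into a sector
	// whose Merkle root is CommD (the unsealed sector CID); sealing the sector
	// yields a new root, CommR.
	return payloadCID, pieceCID
}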

The following data types are unique to the Storage Market:

package storagemarket

import (
	"fmt"
	"time"

	"github.com/ipfs/go-cid"
	logging "github.com/ipfs/go-log/v2"
	"github.com/libp2p/go-libp2p/core/peer"
	ma "github.com/multiformats/go-multiaddr"
	cbg "github.com/whyrusleeping/cbor-gen"

	"github.com/filecoin-project/go-address"
	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/builtin/v9/market"
	"github.com/filecoin-project/go-state-types/crypto"

	"github.com/filecoin-project/go-fil-markets/filestore"
)

var log = logging.Logger("storagemrkt")

//go:generate cbor-gen-for --map-encoding ClientDeal MinerDeal Balance SignedStorageAsk StorageAsk DataRef ProviderDealState DealStages DealStage Log

// The ID for the libp2p protocol for proposing storage deals.
const DealProtocolID101 = "/fil/storage/mk/1.0.1"
const DealProtocolID110 = "/fil/storage/mk/1.1.0"
const DealProtocolID111 = "/fil/storage/mk/1.1.1"

// AskProtocolID is the ID for the libp2p protocol for querying miners for their current StorageAsk.
const OldAskProtocolID = "/fil/storage/ask/1.0.1"
const AskProtocolID = "/fil/storage/ask/1.1.0"

// DealStatusProtocolID is the ID for the libp2p protocol for querying miners for the current status of a deal.
const OldDealStatusProtocolID = "/fil/storage/status/1.0.1"
const DealStatusProtocolID = "/fil/storage/status/1.1.0"

// Balance represents a current balance of funds in the StorageMarketActor.
type Balance struct {
	Locked    abi.TokenAmount
	Available abi.TokenAmount
}

// StorageAsk defines the parameters by which a miner will choose to accept or
// reject a deal. Note: making a storage deal proposal which matches the miner's
// ask is a precondition, but not sufficient to ensure the deal is accepted (the
// storage provider may run its own decision logic).
type StorageAsk struct {
	// Price per GiB / Epoch
	Price         abi.TokenAmount
	VerifiedPrice abi.TokenAmount

	MinPieceSize abi.PaddedPieceSize
	MaxPieceSize abi.PaddedPieceSize
	Miner        address.Address
	Timestamp    abi.ChainEpoch
	Expiry       abi.ChainEpoch
	SeqNo        uint64
}

// SignedStorageAsk is an ask signed by the miner's private key
type SignedStorageAsk struct {
	Ask       *StorageAsk
	Signature *crypto.Signature
}

// SignedStorageAskUndefined represents the empty value for SignedStorageAsk
var SignedStorageAskUndefined = SignedStorageAsk{}

// StorageAskOption allows custom configuration of a storage ask
type StorageAskOption func(*StorageAsk)

// MinPieceSize configures a minimum piece size of a StorageAsk
func MinPieceSize(minPieceSize abi.PaddedPieceSize) StorageAskOption {
	return func(sa *StorageAsk) {
		sa.MinPieceSize = minPieceSize
	}
}

// MaxPieceSize configures maximum piece size of a StorageAsk
func MaxPieceSize(maxPieceSize abi.PaddedPieceSize) StorageAskOption {
	return func(sa *StorageAsk) {
		sa.MaxPieceSize = maxPieceSize
	}
}

// StorageAskUndefined represents an empty value for StorageAsk
var StorageAskUndefined = StorageAsk{}

type ClientDealProposal = market.ClientDealProposal

// MinerDeal is the local state tracked for a deal by a StorageProvider
type MinerDeal struct {
	ClientDealProposal
	ProposalCid           cid.Cid
	AddFundsCid           *cid.Cid
	PublishCid            *cid.Cid
	Miner                 peer.ID
	Client                peer.ID
	State                 StorageDealStatus
	PiecePath             filestore.Path
	MetadataPath          filestore.Path
	SlashEpoch            abi.ChainEpoch
	FastRetrieval         bool
	Message               string
	FundsReserved         abi.TokenAmount
	Ref                   *DataRef
	AvailableForRetrieval bool

	DealID       abi.DealID
	CreationTime cbg.CborTime

	TransferChannelId *datatransfer.ChannelID
	SectorNumber      abi.SectorNumber

	InboundCAR string
}

// NewDealStages creates a new DealStages object ready to be used.
// EXPERIMENTAL; subject to change.
func NewDealStages() *DealStages {
	return &DealStages{}
}

// DealStages captures a timeline of the progress of a deal, grouped by stages.
// EXPERIMENTAL; subject to change.
type DealStages struct {
	// Stages contains an entry for every stage that the deal has gone through.
	// Each stage then contains logs.
	Stages []*DealStage
}

// DealStage captures data about the execution of a deal stage.
// EXPERIMENTAL; subject to change.
type DealStage struct {
	// Human-readable fields.
	// TODO: these _will_ need to be converted to canonical representations, so
	//  they are machine readable.
	Name             string
	Description      string
	ExpectedDuration string

	// Timestamps.
	// TODO: may be worth adding an exit timestamp. It _could_ be inferred from
	//  the start of the next stage, or from the timestamp of the last log line
	//  if this is a terminal stage. But that's non-deterministic and it relies on
	//  assumptions.
	CreatedTime cbg.CborTime
	UpdatedTime cbg.CborTime

	// Logs contains a detailed timeline of events that occurred inside
	// this stage.
	Logs []*Log
}

// Log represents a point-in-time event that occurred inside a deal stage.
// EXPERIMENTAL; subject to change.
type Log struct {
	// Log is a human readable message.
	//
	// TODO: this _may_ need to be converted to a canonical data model so it
	//  is machine-readable.
	Log string

	UpdatedTime cbg.CborTime
}

// GetStage returns the DealStage object for a named stage, or nil if not found.
//
// TODO: the input should be a strongly-typed enum instead of a free-form string.
// TODO: drop Get from GetStage to make this code more idiomatic. Return a
// second ok boolean to make it even more idiomatic.
// EXPERIMENTAL; subject to change.
func (ds *DealStages) GetStage(stage string) *DealStage {
	if ds == nil {
		return nil
	}

	for _, s := range ds.Stages {
		if s.Name == stage {
			return s
		}
	}

	return nil
}

// AddStageLog adds a log to the specified stage, creating the stage if it
// doesn't exist yet.
// EXPERIMENTAL; subject to change.
func (ds *DealStages) AddStageLog(stage, description, expectedDuration, msg string) {
	if ds == nil {
		return
	}

	log.Debugf("adding log for stage <%s> msg <%s>", stage, msg)

	now := curTime()
	st := ds.GetStage(stage)
	if st == nil {
		st = &DealStage{
			CreatedTime: now,
		}
		ds.Stages = append(ds.Stages, st)
	}

	st.Name = stage
	st.Description = description
	st.ExpectedDuration = expectedDuration
	st.UpdatedTime = now
	if msg != "" && (len(st.Logs) == 0 || st.Logs[len(st.Logs)-1].Log != msg) {
		// only add the log if it's not a duplicate.
		st.Logs = append(st.Logs, &Log{msg, now})
	}
}

// AddLog adds a log inside the DealStages object of the deal.
// EXPERIMENTAL; subject to change.
func (d *ClientDeal) AddLog(msg string, a ...interface{}) {
	if len(a) > 0 {
		msg = fmt.Sprintf(msg, a...)
	}

	stage := DealStates[d.State]
	description := DealStatesDescriptions[d.State]
	expectedDuration := DealStatesDurations[d.State]

	d.DealStages.AddStageLog(stage, description, expectedDuration, msg)
}

// ClientDeal is the local state tracked for a deal by a StorageClient
type ClientDeal struct {
	market.ClientDealProposal
	ProposalCid       cid.Cid
	AddFundsCid       *cid.Cid
	State             StorageDealStatus
	Miner             peer.ID
	MinerWorker       address.Address
	DealID            abi.DealID
	DataRef           *DataRef
	Message           string
	DealStages        *DealStages
	PublishMessage    *cid.Cid
	SlashEpoch        abi.ChainEpoch
	PollRetryCount    uint64
	PollErrorCount    uint64
	FastRetrieval     bool
	FundsReserved     abi.TokenAmount
	CreationTime      cbg.CborTime
	TransferChannelID *datatransfer.ChannelID
	SectorNumber      abi.SectorNumber
}

// StorageProviderInfo describes on chain information about a StorageProvider
// (use QueryAsk to determine more specific deal parameters)
type StorageProviderInfo struct {
	Address    address.Address // actor address
	Owner      address.Address
	Worker     address.Address // signs messages
	SectorSize uint64
	PeerID     peer.ID
	Addrs      []ma.Multiaddr
}

// ProposeStorageDealResult is the result returned from proposing a deal
type ProposeStorageDealResult struct {
	ProposalCid cid.Cid
}

// ProposeStorageDealParams describes the parameters for proposing a storage deal
type ProposeStorageDealParams struct {
	Addr          address.Address
	Info          *StorageProviderInfo
	Data          *DataRef
	StartEpoch    abi.ChainEpoch
	EndEpoch      abi.ChainEpoch
	Price         abi.TokenAmount
	Collateral    abi.TokenAmount
	Rt            abi.RegisteredSealProof
	FastRetrieval bool
	VerifiedDeal  bool
}

const (
	// TTGraphsync means data for a deal will be transferred by graphsync
	TTGraphsync = "graphsync"

	// TTManual means data for a deal will be transferred manually and imported
	// on the provider
	TTManual = "manual"
)

// DataRef is a reference for how data will be transferred for a given storage deal
type DataRef struct {
	TransferType string
	Root         cid.Cid

	PieceCid     *cid.Cid              // Optional for non-manual transfer, will be recomputed from the data if not given
	PieceSize    abi.UnpaddedPieceSize // Optional for non-manual transfer, will be recomputed from the data if not given
	RawBlockSize uint64                // Optional: used as the denominator when calculating transfer %
}

// ProviderDealState represents a Provider's current state of a deal
type ProviderDealState struct {
	State         StorageDealStatus
	Message       string
	Proposal      *market.DealProposal
	ProposalCid   *cid.Cid
	AddFundsCid   *cid.Cid
	PublishCid    *cid.Cid
	DealID        abi.DealID
	FastRetrieval bool
}

func curTime() cbg.CborTime {
	now := time.Now()
	return cbg.CborTime(time.Unix(0, now.UnixNano()).UTC())
}

Details about StorageDealProposal and StorageDeal (which are used in the Storage Market and elsewhere) specifically can be found in Storage Deal.

Protocols

Name: Storage Query Protocol
Protocol ID: /fil/<network-name>/storage/ask/1.0.1

Request: CBOR Encoded AskProtocolRequest Data Structure
Response: CBOR Encoded AskProtocolResponse Data Structure

Name: Storage Deal Protocol
Protocol ID: /fil/<network-name>/storage/mk/1.0.1

Request: CBOR Encoded DealProtocolRequest Data Structure
Response: CBOR Encoded DealProtocolResponse Data Structure
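
Below is a minimal, non-normative sketch of how a client might issue a query on the Storage Query Protocol over libp2p. The request/response helpers are hypothetical stand-ins for the CBOR-encoded AskProtocolRequest and AskProtocolResponse structures, and error handling is elided.

package example

import (
	"context"
	"io"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-fil-markets/storagemarket"
)

// Hypothetical stand-ins for the CBOR-encoded AskProtocolRequest / AskProtocolResponse.
func encodeAskRequest(miner address.Address) []byte                          { return nil }
func decodeAskResponse(r io.Reader) (*storagemarket.SignedStorageAsk, error) { return nil, nil }

// queryAsk opens a stream to a provider on the ask protocol and performs a
// single request/response exchange. Illustrative only.
func queryAsk(ctx context.Context, h host.Host, provider peer.ID, miner address.Address) (*storagemarket.SignedStorageAsk, error) {
	s, err := h.NewStream(ctx, provider, protocol.ID(storagemarket.AskProtocolID))
	if err != nil {
		return nil, err
	}
	defer s.Close()

	if _, err := s.Write(encodeAskRequest(miner)); err != nil { // send the CBOR-encoded request
		return nil, err
	}
	return decodeAskResponse(s) // read and decode the CBOR-encoded response
}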

Storage Provider

The StorageProvider is a module that handles incoming queries for Asks and proposals for Deals from a StorageClient. It also tracks deals as they move through the deal flow, handling off-chain actions during the negotiation phases of the deal and ultimately telling the StorageMarketActor to publish on chain. The StorageProvider’s last action is to hand off a published deal for storage and sealing to the Storage Mining Subsystem. Note that any address registered as a StorageMarketParticipant with the StorageMarketActor can be used with the StorageClient.

It is worth highlighting that a single participant can be a StorageClient, StorageProvider, or both at the same time.

Because most of what a Storage Provider does is respond to actions initiated by a StorageClient, most of its public facing methods relate to getting current status on deals, as opposed to initiating new actions. However, a user of the StorageProvider module can update the current Ask for the provider.

package storagemarket

import (
	"context"
	"io"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/go-fil-markets/shared"
)

// ProviderSubscriber is a callback that is run when events are emitted on a StorageProvider
type ProviderSubscriber func(event ProviderEvent, deal MinerDeal)

// StorageProvider provides an interface to the storage market for a single
// storage miner.
type StorageProvider interface {

	// Start initializes deal processing on a StorageProvider and restarts in progress deals.
	// It also registers the provider with a StorageMarketNetwork so it can receive incoming
	// messages on the storage market's libp2p protocols
	Start(ctx context.Context) error

	// OnReady registers a listener for when the provider comes on line
	OnReady(shared.ReadyFunc)

	// Stop terminates processing of deals on a StorageProvider
	Stop() error

	// SetAsk configures the storage miner's ask with the provided prices (for unverified and verified deals),
	// duration, and options. Any previously-existing ask is replaced.
	SetAsk(price abi.TokenAmount, verifiedPrice abi.TokenAmount, duration abi.ChainEpoch, options ...StorageAskOption) error

	// GetAsk returns the storage miner's ask, or nil if one does not exist.
	GetAsk() *SignedStorageAsk

	// GetLocalDeal gets a deal by signed proposal cid
	GetLocalDeal(cid cid.Cid) (MinerDeal, error)

	// LocalDealCount gets the number of local deals
	LocalDealCount() (int, error)

	// ListLocalDeals lists deals processed by this storage provider
	ListLocalDeals() ([]MinerDeal, error)

	// ListLocalDealsPage lists deals by creation time descending, starting
	// at the deal with the given signed proposal cid, skipping offset deals
	// and returning up to limit deals
	ListLocalDealsPage(startPropCid *cid.Cid, offset int, limit int) ([]MinerDeal, error)

	// AddStorageCollateral adds storage collateral
	AddStorageCollateral(ctx context.Context, amount abi.TokenAmount) error

	// GetStorageCollateral returns the current collateral balance
	GetStorageCollateral(ctx context.Context) (Balance, error)

	// ImportDataForDeal manually imports data for an offline storage deal
	ImportDataForDeal(ctx context.Context, propCid cid.Cid, data io.Reader) error

	// SubscribeToEvents listens for events that happen related to storage deals on a provider
	SubscribeToEvents(subscriber ProviderSubscriber) shared.Unsubscribe

	RetryDealPublishing(propCid cid.Cid) error

	AnnounceDealToIndexer(ctx context.Context, proposalCid cid.Cid) error

	AnnounceAllDealsToIndexer(ctx context.Context) error
}
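
As a sketch of how this interface is typically driven (illustrative only, with placeholder values; it assumes the go-fil-markets storagemarket package path and that shared.Unsubscribe is a callable func()): start the provider, subscribe to deal events, and publish an ask.

package example

import (
	"context"
	"log"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/go-fil-markets/storagemarket"
)

// runProvider drives an already-constructed StorageProvider: it starts deal
// processing, logs deal lifecycle events, and sets the provider's ask.
func runProvider(ctx context.Context, p storagemarket.StorageProvider) error {
	if err := p.Start(ctx); err != nil {
		return err
	}
	defer p.Stop()

	// Log deal lifecycle events emitted by the provider.
	unsub := p.SubscribeToEvents(func(event storagemarket.ProviderEvent, deal storagemarket.MinerDeal) {
		log.Printf("provider deal event: %v", event)
	})
	defer unsub()

	// Publish an ask: unverified price, verified price, and validity duration
	// in epochs. The figures below are arbitrary example values.
	price := abi.NewTokenAmount(500_000_000)
	verifiedPrice := abi.NewTokenAmount(250_000_000)
	if err := p.SetAsk(price, verifiedPrice, abi.ChainEpoch(100_000)); err != nil {
		return err
	}

	<-ctx.Done()
	return nil
}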

Storage Client

The StorageClient is a module that discovers miners, determines their asks, and proposes deals to StorageProviders. It also tracks deals as they move through the deal flow. Note that any address registered as a StorageMarketParticipant with the StorageMarketActor can be used with the StorageClient.

Recall that a single participant can be a StorageClient, StorageProvider, or both at the same time.

package storagemarket

import (
	"context"

	bstore "github.com/ipfs/boxo/blockstore"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/go-fil-markets/shared"
)

type PayloadCID = cid.Cid

// BlockstoreAccessor is used by the storage market client to get a
// blockstore when needed, concretely to send the payload to the provider.
// This abstraction allows the caller to provide any blockstore implementation:
// a CARv2 file, an IPFS blockstore, or something else.
//
// The key is a payload CID because this is the unique top-level key of a
// client-side data import.
type BlockstoreAccessor interface {
	Get(PayloadCID) (bstore.Blockstore, error)
	Done(PayloadCID) error
}

// ClientSubscriber is a callback that is run when events are emitted on a StorageClient
type ClientSubscriber func(event ClientEvent, deal ClientDeal)

// StorageClient is a client interface for making storage deals with a StorageProvider
type StorageClient interface {

	// Start initializes deal processing on a StorageClient and restarts
	// in progress deals
	Start(ctx context.Context) error

	// OnReady registers a listener for when the client comes on line
	OnReady(shared.ReadyFunc)

	// Stop ends deal processing on a StorageClient
	Stop() error

	// ListProviders queries chain state and returns active storage providers
	ListProviders(ctx context.Context) (<-chan StorageProviderInfo, error)

	// ListLocalDeals lists deals initiated by this storage client
	ListLocalDeals(ctx context.Context) ([]ClientDeal, error)

	// GetLocalDeal gets a deal previously initiated by this client, by signed proposal cid
	GetLocalDeal(ctx context.Context, cid cid.Cid) (ClientDeal, error)

	// GetAsk returns the current ask for a storage provider
	GetAsk(ctx context.Context, info StorageProviderInfo) (*StorageAsk, error)

	// GetProviderDealState queries a provider for the current state of a client's deal
	GetProviderDealState(ctx context.Context, proposalCid cid.Cid) (*ProviderDealState, error)

	// ProposeStorageDeal initiates deal negotiation with a Storage Provider
	ProposeStorageDeal(ctx context.Context, params ProposeStorageDealParams) (*ProposeStorageDealResult, error)

	// GetPaymentEscrow returns the current funds available for deal payment
	GetPaymentEscrow(ctx context.Context, addr address.Address) (Balance, error)

	// AddPaymentEscrow adds funds for deal payments to the client's escrow balance
	AddPaymentEscrow(ctx context.Context, addr address.Address, amount abi.TokenAmount) error

	// SubscribeToEvents listens for events that happen related to storage deals on a client
	SubscribeToEvents(subscriber ClientSubscriber) shared.Unsubscribe
}
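
A corresponding client-side sketch (illustrative only; it assumes the go-fil-markets storagemarket package path, and proposalCid refers to a deal this client previously proposed): start the client and query the provider's view of a deal, using the ProviderDealState structure shown earlier.

package example

import (
	"context"
	"log"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/go-fil-markets/storagemarket"
)

// checkDeal asks the provider for its current view of a deal identified by
// the deal's signed proposal CID.
func checkDeal(ctx context.Context, c storagemarket.StorageClient, proposalCid cid.Cid) error {
	if err := c.Start(ctx); err != nil {
		return err
	}
	defer c.Stop()

	state, err := c.GetProviderDealState(ctx, proposalCid)
	if err != nil {
		return err
	}
	log.Printf("deal status %v: %s", state.State, state.Message)
	return nil
}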

Storage Market On-Chain Components

Storage Deals

There are two types of deals in Filecoin markets: storage deals and retrieval deals. Storage deals are recorded on the blockchain and enforced by the protocol. Retrieval deals happen off-chain and are enabled by a micropayment channel between the transacting parties (see Retrieval Market for more information).

The lifecycle of a Storage Deal touches several major subsystems, components, and protocols in Filecoin.

This section describes the storage deal data type and provides a technical outline of the deal flow in terms of how all the components interact with each other, as well as the functions they call. For more detail on the off-chain parts of the storage market see the Storage Market section.

Data Types

package market

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"unicode/utf8"

	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	acrypto "github.com/filecoin-project/go-state-types/crypto"
	market0 "github.com/filecoin-project/specs-actors/actors/builtin/market"
	"github.com/ipfs/go-cid"
	cbg "github.com/whyrusleeping/cbor-gen"
	"golang.org/x/xerrors"
)

//var PieceCIDPrefix = cid.Prefix{
//	Version:  1,
//	Codec:    cid.FilCommitmentUnsealed,
//	MhType:   mh.SHA2_256_TRUNC254_PADDED,
//	MhLength: 32,
//}
var PieceCIDPrefix = market0.PieceCIDPrefix

// The DealLabel is a kinded union of string or byte slice.
// It serializes to a CBOR string or CBOR byte string depending on which form it takes.
// The zero value is serialized as an empty CBOR string (maj type 3).
type DealLabel struct {
	bs        []byte
	notString bool
}

// Zero value of DealLabel is canonical EmptyDealLabel
var EmptyDealLabel = DealLabel{}

func NewLabelFromString(s string) (DealLabel, error) {
	if len(s) > DealMaxLabelSize {
		return EmptyDealLabel, xerrors.Errorf("provided string is too large to be a label (%d), max length (%d)", len(s), DealMaxLabelSize)
	}
	if !utf8.ValidString(s) {
		return EmptyDealLabel, xerrors.Errorf("provided string is invalid utf8")
	}
	return DealLabel{
		bs:        []byte(s),
		notString: false,
	}, nil
}

func NewLabelFromBytes(b []byte) (DealLabel, error) {
	if len(b) > DealMaxLabelSize {
		return EmptyDealLabel, xerrors.Errorf("provided bytes are too large to be a label (%d), max length (%d)", len(b), DealMaxLabelSize)
	}

	return DealLabel{
		bs:        b,
		notString: true,
	}, nil
}

func (label DealLabel) IsString() bool {
	return !label.notString
}

func (label DealLabel) IsBytes() bool {
	return label.notString
}

func (label DealLabel) ToString() (string, error) {
	if !label.IsString() {
		return "", xerrors.Errorf("label is not string")
	}

	return string(label.bs), nil
}

func (label DealLabel) ToBytes() ([]byte, error) {
	if !label.IsBytes() {
		return nil, xerrors.Errorf("label is not bytes")
	}
	return label.bs, nil
}

func (label DealLabel) Length() int {
	return len(label.bs)
}

func (l DealLabel) Equals(o DealLabel) bool {
	return bytes.Equal(l.bs, o.bs) && l.notString == o.notString
}

func (label *DealLabel) MarshalCBOR(w io.Writer) error {
	scratch := make([]byte, 9)

	// nil *DealLabel counts as EmptyLabel
	// on chain structures should never have a pointer to a DealLabel but the case is included for completeness
	if label == nil {
		if err := cbg.WriteMajorTypeHeaderBuf(scratch, w, cbg.MajTextString, 0); err != nil {
			return err
		}
		_, err := io.WriteString(w, string(""))
		return err
	}
	if len(label.bs) > cbg.ByteArrayMaxLen {
		return xerrors.Errorf("label is too long to marshal (%d), max allowed (%d)", len(label.bs), cbg.ByteArrayMaxLen)
	}

	majorType := byte(cbg.MajByteString)
	if label.IsString() {
		majorType = cbg.MajTextString
	}

	if err := cbg.WriteMajorTypeHeaderBuf(scratch, w, majorType, uint64(len(label.bs))); err != nil {
		return err
	}
	_, err := w.Write(label.bs)
	return err
}

func (label *DealLabel) UnmarshalCBOR(br io.Reader) error {
	if label == nil {
		return xerrors.Errorf("cannot unmarshal into nil pointer")
	}

	// reset fields
	label.bs = nil

	scratch := make([]byte, 8)

	maj, length, err := cbg.CborReadHeaderBuf(br, scratch)
	if err != nil {
		return err
	}
	if maj != cbg.MajTextString && maj != cbg.MajByteString {
		return fmt.Errorf("unexpected major tag (%d) when unmarshaling DealLabel: only textString (%d) or byteString (%d) expected", maj, cbg.MajTextString, cbg.MajByteString)
	}
	if length > cbg.ByteArrayMaxLen {
		return fmt.Errorf("label was too long (%d), max allowed (%d)", length, cbg.ByteArrayMaxLen)
	}
	buf := make([]byte, length)
	_, err = io.ReadAtLeast(br, buf, int(length))
	if err != nil {
		return err
	}
	label.bs = buf
	label.notString = maj != cbg.MajTextString
	if !label.notString && !utf8.ValidString(string(buf)) {
		return fmt.Errorf("label string not valid utf8")
	}

	return nil
}

func (label *DealLabel) MarshalJSON() ([]byte, error) {
	str, err := label.ToString()
	if err != nil {
		return nil, xerrors.Errorf("can only marshal strings: %w", err)
	}

	return json.Marshal(str)
}

func (label *DealLabel) UnmarshalJSON(b []byte) error {
	var str string
	if err := json.Unmarshal(b, &str); err != nil {
		return xerrors.Errorf("failed to unmarshal string: %w", err)
	}

	newLabel, err := NewLabelFromString(str)
	if err != nil {
		return xerrors.Errorf("failed to create label from string: %w", err)
	}

	*label = newLabel
	return nil
}

// Note: Deal Collateral is only released and returned to clients and miners
// when the storage deal stops counting towards power. In the current iteration,
// it will be released when the sector containing the storage deals expires,
// even though some storage deals can expire earlier than the sector does.
// Collaterals are denominated in PerEpoch to incur a cost for self dealing or
// minimal deals that last for a long time.
// Note: ClientCollateralPerEpoch may not be needed and removed pending future confirmation.
// There will be a Minimum value for both client and provider deal collateral.
type DealProposal struct {
	PieceCID     cid.Cid `checked:"true"` // Checked in validateDeal, CommP
	PieceSize    abi.PaddedPieceSize
	VerifiedDeal bool
	Client       addr.Address
	Provider     addr.Address

	// Label is an arbitrary client chosen label to apply to the deal
	Label DealLabel

	// Nominal start epoch. Deal payment is linear between StartEpoch and EndEpoch,
	// with total amount StoragePricePerEpoch * (EndEpoch - StartEpoch).
	// Storage deal must appear in a sealed (proven) sector no later than StartEpoch,
	// otherwise it is invalid.
	StartEpoch           abi.ChainEpoch
	EndEpoch             abi.ChainEpoch
	StoragePricePerEpoch abi.TokenAmount

	ProviderCollateral abi.TokenAmount
	ClientCollateral   abi.TokenAmount
}

// ClientDealProposal is a DealProposal signed by a client
type ClientDealProposal struct {
	Proposal        DealProposal
	ClientSignature acrypto.Signature
}

func (p *DealProposal) Duration() abi.ChainEpoch {
	return p.EndEpoch - p.StartEpoch
}

func (p *DealProposal) TotalStorageFee() abi.TokenAmount {
	return big.Mul(p.StoragePricePerEpoch, big.NewInt(int64(p.Duration())))
}

func (p *DealProposal) ClientBalanceRequirement() abi.TokenAmount {
	return big.Add(p.ClientCollateral, p.TotalStorageFee())
}

func (p *DealProposal) ProviderBalanceRequirement() abi.TokenAmount {
	return p.ProviderCollateral
}

func (p *DealProposal) Cid() (cid.Cid, error) {
	buf := new(bytes.Buffer)
	if err := p.MarshalCBOR(buf); err != nil {
		return cid.Undef, err
	}
	return abi.CidBuilder.Sum(buf.Bytes())
}
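
As a worked example of the payment model above: a proposal with StartEpoch 100000, EndEpoch 600000 and a StoragePricePerEpoch of 1000 attoFIL has Duration() = 500000 epochs and TotalStorageFee() = 500000 × 1000 = 500,000,000 attoFIL. With a ClientCollateral of 10,000,000 attoFIL, ClientBalanceRequirement() is 510,000,000 attoFIL, while ProviderBalanceRequirement() is simply the ProviderCollateral.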

Storage Market Actor

The StorageMarketActor is responsible for processing and managing on-chain deals. This is also the entry point of all storage deals and data into the system. It maintains a mapping of StorageDealID to StorageDeal and keeps track of the locked balances of each StorageClient and StorageProvider. When a deal is posted on chain through the StorageMarketActor, the actor first checks that both transacting parties have sufficient balances locked up before including the deal on chain.

StorageMarketActor implementation
package market

import (
	"sort"

	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-bitfield"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/exitcode"
	rtt "github.com/filecoin-project/go-state-types/rt"
	market0 "github.com/filecoin-project/specs-actors/actors/builtin/market"
	market3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/market"
	market5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/market"
	market6 "github.com/filecoin-project/specs-actors/v6/actors/builtin/market"
	"github.com/ipfs/go-cid"
	cbg "github.com/whyrusleeping/cbor-gen"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/power"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/reward"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin/verifreg"
	"github.com/filecoin-project/specs-actors/v8/actors/runtime"
	"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)

type Actor struct{}

type Runtime = runtime.Runtime

func (a Actor) Exports() []interface{} {
	return []interface{}{
		builtin.MethodConstructor: a.Constructor,
		2:                         a.AddBalance,
		3:                         a.WithdrawBalance,
		4:                         a.PublishStorageDeals,
		5:                         a.VerifyDealsForActivation,
		6:                         a.ActivateDeals,
		7:                         a.OnMinerSectorsTerminate,
		8:                         a.ComputeDataCommitment,
		9:                         a.CronTick,
	}
}

func (a Actor) Code() cid.Cid {
	return builtin.StorageMarketActorCodeID
}

func (a Actor) IsSingleton() bool {
	return true
}

func (a Actor) State() cbor.Er {
	return new(State)
}

var _ runtime.VMActor = Actor{}

////////////////////////////////////////////////////////////////////////////////
// Actor methods
////////////////////////////////////////////////////////////////////////////////

func (a Actor) Constructor(rt Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	st, err := ConstructState(adt.AsStore(rt))
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to create state")
	rt.StateCreate(st)
	return nil
}

//type WithdrawBalanceParams struct {
//	ProviderOrClientAddress addr.Address
//	Amount                  abi.TokenAmount
//}
type WithdrawBalanceParams = market0.WithdrawBalanceParams

// Attempt to withdraw the specified amount from the balance held in escrow.
// If less than the specified amount is available, yields the entire available balance.
// Returns the amount withdrawn.
func (a Actor) WithdrawBalance(rt Runtime, params *WithdrawBalanceParams) *abi.TokenAmount {
	if params.Amount.LessThan(big.Zero()) {
		rt.Abortf(exitcode.ErrIllegalArgument, "negative amount %v", params.Amount)
	}

	nominal, recipient, approvedCallers := escrowAddress(rt, params.ProviderOrClientAddress)
	// for providers -> only corresponding owner or worker can withdraw
	// for clients -> only the client, i.e. the recipient, can withdraw
	rt.ValidateImmediateCallerIs(approvedCallers...)

	amountExtracted := abi.NewTokenAmount(0)
	var st State
	rt.StateTransaction(&st, func() {
		msm, err := st.mutator(adt.AsStore(rt)).withEscrowTable(WritePermission).
			withLockedTable(WritePermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")

		// The withdrawable amount might be slightly less than nominal
		// depending on whether or not all relevant entries have been processed
		// by cron
		minBalance, err := msm.lockedTable.Get(nominal)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get locked balance")

		ex, err := msm.escrowTable.SubtractWithMinimum(nominal, params.Amount, minBalance)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to subtract from escrow table")

		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")

		amountExtracted = ex
	})
	code := rt.Send(recipient, builtin.MethodSend, nil, amountExtracted, &builtin.Discard{})
	builtin.RequireSuccess(rt, code, "failed to send funds")
	return &amountExtracted
}

// Deposits the received value into the balance held in escrow.
func (a Actor) AddBalance(rt Runtime, providerOrClientAddress *addr.Address) *abi.EmptyValue {
	msgValue := rt.ValueReceived()
	builtin.RequireParam(rt, msgValue.GreaterThan(big.Zero()), "balance to add must be greater than zero")

	// only signing parties can add balance for client AND provider.
	rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)

	nominal, _, _ := escrowAddress(rt, *providerOrClientAddress)

	var st State
	rt.StateTransaction(&st, func() {
		msm, err := st.mutator(adt.AsStore(rt)).withEscrowTable(WritePermission).
			withLockedTable(WritePermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")

		err = msm.escrowTable.Add(nominal, msgValue)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to add balance to escrow table")
		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")
	})
	return nil
}

type PublishStorageDealsParams struct {
	Deals []ClientDealProposal
}

//type PublishStorageDealsReturn struct {
//	IDs        []abi.DealID
//	ValidDeals bitfield.BitField
//}
type PublishStorageDealsReturn = market6.PublishStorageDealsReturn

// Publish a new set of storage deals (not yet included in a sector).
func (a Actor) PublishStorageDeals(rt Runtime, params *PublishStorageDealsParams) *PublishStorageDealsReturn {
	// Deal message must have a From field identical to the provider of all the deals.
	// This allows us to retain and verify only the client's signature in each deal proposal itself.
	rt.ValidateImmediateCallerType(builtin.CallerTypesSignable...)
	if len(params.Deals) == 0 {
		rt.Abortf(exitcode.ErrIllegalArgument, "empty deals parameter")
	}

	// All deals should have the same provider so get worker once
	providerRaw := params.Deals[0].Proposal.Provider
	provider, ok := rt.ResolveAddress(providerRaw)
	if !ok {
		rt.Abortf(exitcode.ErrNotFound, "failed to resolve provider address %v", providerRaw)
	}

	codeID, ok := rt.GetActorCodeCID(provider)
	builtin.RequireParam(rt, ok, "no codeId for address %v", provider)
	if !codeID.Equals(builtin.StorageMinerActorCodeID) {
		rt.Abortf(exitcode.ErrIllegalArgument, "deal provider is not a StorageMinerActor")
	}

	caller := rt.Caller()
	_, worker, controllers := builtin.RequestMinerControlAddrs(rt, provider)
	callerOk := caller == worker
	for _, controller := range controllers {
		if callerOk {
			break
		}
		callerOk = caller == controller
	}
	if !callerOk {
		rt.Abortf(exitcode.ErrForbidden, "caller %v is not worker or control address of provider %v", caller, provider)
	}
	resolvedAddrs := make(map[addr.Address]addr.Address, len(params.Deals))
	baselinePower := requestCurrentBaselinePower(rt)
	networkRawPower, networkQAPower := requestCurrentNetworkPower(rt)

	// Drop invalid deals
	var st State
	proposalCidLookup := make(map[cid.Cid]struct{})
	validProposalCids := make([]cid.Cid, 0)
	validDeals := make([]ClientDealProposal, 0, len(params.Deals))
	totalClientLockup := make(map[addr.Address]abi.TokenAmount)
	totalProviderLockup := abi.NewTokenAmount(0)

	validInputBf := bitfield.New()
	rt.StateReadonly(&st)
	msm, err := st.mutator(adt.AsStore(rt)).withPendingProposals(ReadOnlyPermission).
		withEscrowTable(ReadOnlyPermission).withLockedTable(ReadOnlyPermission).build()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")
	for di, deal := range params.Deals {
		/*
			drop malformed deals
		*/
		if err := validateDeal(rt, deal, networkRawPower, networkQAPower, baselinePower); err != nil {
			rt.Log(rtt.INFO, "invalid deal %d: %s", di, err)
			continue
		}
		if deal.Proposal.Provider != provider && deal.Proposal.Provider != providerRaw {
			rt.Log(rtt.INFO, "invalid deal %d: cannot publish deals from multiple providers in one batch", di)
			continue
		}
		client, ok := rt.ResolveAddress(deal.Proposal.Client)
		if !ok {
			rt.Log(rtt.INFO, "invalid deal %d: failed to resolve proposal.Client address %v for deal ", di, deal.Proposal.Client)
			continue
		}

		/*
			drop deals with insufficient lock up to cover costs
		*/
		if _, ok := totalClientLockup[client]; !ok {
			totalClientLockup[client] = abi.NewTokenAmount(0)
		}
		totalClientLockup[client] = big.Sum(totalClientLockup[client], deal.Proposal.ClientBalanceRequirement())
		clientBalanceOk, err := msm.balanceCovered(client, totalClientLockup[client])
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to check client balance coverage")
		if !clientBalanceOk {
			rt.Log(rtt.INFO, "invalid deal: %d: insufficient client funds to cover proposal cost", di)
			continue
		}
		totalProviderLockup = big.Sum(totalProviderLockup, deal.Proposal.ProviderCollateral)
		providerBalanceOk, err := msm.balanceCovered(provider, totalProviderLockup)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to check provider balance coverage")
		if !providerBalanceOk {
			rt.Log(rtt.INFO, "invalid deal: %d: insufficient provider funds to cover proposal cost", di)
			continue
		}

		/*
			drop duplicate deals
		*/
		// Normalise provider and client addresses in the proposal stored on chain.
		// Must happen after signature verification and before taking cid.
		deal.Proposal.Provider = provider
		resolvedAddrs[deal.Proposal.Client] = client
		deal.Proposal.Client = client

		pcid, err := deal.Proposal.Cid()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to take cid of proposal %d", di)

		// check proposalCids for duplication within message batch
		// check state PendingProposals for duplication across messages
		duplicateInState, err := msm.pendingDeals.Has(abi.CidKey(pcid))
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to check for existence of deal proposal")
		_, duplicateInMessage := proposalCidLookup[pcid]
		if duplicateInState || duplicateInMessage {
			rt.Log(rtt.INFO, "invalid deal %d: cannot publish duplicate deal proposal %s", di)
			continue
		}

		/*
			check VerifiedClient allowed cap and deduct PieceSize from cap
			drop deals with a DealSize that cannot be fully covered by VerifiedClient's available DataCap
		*/
		if deal.Proposal.VerifiedDeal {
			code := rt.Send(
				builtin.VerifiedRegistryActorAddr,
				builtin.MethodsVerifiedRegistry.UseBytes,
				&verifreg.UseBytesParams{
					Address:  client,
					DealSize: big.NewIntUnsigned(uint64(deal.Proposal.PieceSize)),
				},
				abi.NewTokenAmount(0),
				&builtin.Discard{},
			)
			if code.IsError() {
				rt.Log(rtt.INFO, "invalid deal %d: failed to acquire datacap exitcode: %d", di, code)
				continue
			}
		}

		// update valid deal state
		proposalCidLookup[pcid] = struct{}{}
		validProposalCids = append(validProposalCids, pcid)
		validDeals = append(validDeals, deal)
		validInputBf.Set(uint64(di))
	}

	validDealCount, err := validInputBf.Count()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to count valid deals in bitfield")
	builtin.RequirePredicate(rt, len(validDeals) == len(validProposalCids), exitcode.ErrIllegalState,
		"%d valid deals but %d valid proposal cids", len(validDeals), len(validProposalCids))
	builtin.RequirePredicate(rt, uint64(len(validDeals)) == validDealCount, exitcode.ErrIllegalState,
		"%d valid deals but validDealCount=%d", len(validDeals), validDealCount)
	builtin.RequireParam(rt, validDealCount > 0, "All deal proposals invalid")

	var newDealIds []abi.DealID
	rt.StateTransaction(&st, func() {
		msm, err := st.mutator(adt.AsStore(rt)).withPendingProposals(WritePermission).
			withDealProposals(WritePermission).withDealsByEpoch(WritePermission).withEscrowTable(WritePermission).
			withLockedTable(WritePermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")

		// All storage deal proposals will be added in an atomic transaction; this operation will be rolled back if any of them fails.
		// This should only fail on programmer error because all expected invalid conditions should be filtered in the first set of checks.
		for vdi, validDeal := range validDeals {
			err := msm.lockClientAndProviderBalances(&validDeal.Proposal)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to lock balance")

			id := msm.generateStorageDealID()

			pcid := validProposalCids[vdi]
			err = msm.pendingDeals.Put(abi.CidKey(pcid))
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set pending deal")

			err = msm.dealProposals.Set(id, &validDeal.Proposal)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set deal")

			// We stagger the first epoch at which the deal will be processed (derived from the deal ID) so an attacker
			// isn't able to schedule too many deals for the same tick.
			processEpoch := GenRandNextEpoch(validDeal.Proposal.StartEpoch, id)

			err = msm.dealsByEpoch.Put(processEpoch, id)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set deal ops by epoch")

			newDealIds = append(newDealIds, id)
		}
		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")
	})

	return &PublishStorageDealsReturn{
		IDs:        newDealIds,
		ValidDeals: validInputBf,
	}
}

// Changed in v3:
// - Array of sectors rather than just one
// - Removed SectorStart (which is unknown at call time)
//type VerifyDealsForActivationParams struct {
//	Sectors []SectorDeals
//}
type VerifyDealsForActivationParams = market3.VerifyDealsForActivationParams

//type SectorDeals struct {
//	SectorExpiry abi.ChainEpoch
//	DealIDs      []abi.DealID
//}
type SectorDeals = market3.SectorDeals

// Changed in v3:
// - Array of sectors weights
//type VerifyDealsForActivationReturn struct {
//	Sectors []SectorWeights
//}
type VerifyDealsForActivationReturn = market3.VerifyDealsForActivationReturn

//type SectorWeights struct {
//	DealSpace          uint64         // Total space in bytes of submitted deals.
//	DealWeight         abi.DealWeight // Total space*time of submitted deals.
//	VerifiedDealWeight abi.DealWeight // Total space*time of submitted verified deals.
//}
type SectorWeights = market3.SectorWeights

// Computes the weight of deals proposed for inclusion in a number of sectors.
// Deal weight is defined as the sum, over all deals in the set, of the product of deal size and duration.
//
// This method performs some light validation on the way in order to fail early if deals can be
// determined to be invalid for the proposed sector properties.
// Full deal validation is deferred to deal activation since it depends on the activation epoch.
func (a Actor) VerifyDealsForActivation(rt Runtime, params *VerifyDealsForActivationParams) *VerifyDealsForActivationReturn {
	rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.Caller()
	currEpoch := rt.CurrEpoch()

	var st State
	rt.StateReadonly(&st)
	store := adt.AsStore(rt)

	proposals, err := AsDealProposalArray(store, st.Proposals)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deal proposals")

	weights := make([]SectorWeights, len(params.Sectors))
	for i, sector := range params.Sectors {
		// Pass the current epoch as the activation epoch for validation.
		// The sector activation epoch isn't yet known, but it's still more helpful to fail now if the deal
		// is so late that a sector activating now couldn't include it.
		dealWeight, verifiedWeight, dealSpace, err := validateAndComputeDealWeight(proposals, sector.DealIDs, minerAddr, sector.SectorExpiry, currEpoch)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to validate deal proposals for activation")

		weights[i] = SectorWeights{
			DealSpace:          dealSpace,
			DealWeight:         dealWeight,
			VerifiedDealWeight: verifiedWeight,
		}
	}

	return &VerifyDealsForActivationReturn{
		Sectors: weights,
	}
}

//type ActivateDealsParams struct {
//	DealIDs      []abi.DealID
//	SectorExpiry abi.ChainEpoch
//}
type ActivateDealsParams = market0.ActivateDealsParams

// Verify that a given set of storage deals is valid for a sector currently being ProveCommitted,
// update the market's internal state accordingly.
func (a Actor) ActivateDeals(rt Runtime, params *ActivateDealsParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.Caller()
	currEpoch := rt.CurrEpoch()

	var st State
	store := adt.AsStore(rt)

	// Update deal dealStates.
	rt.StateTransaction(&st, func() {
		_, _, _, err := ValidateDealsForActivation(&st, store, params.DealIDs, minerAddr, params.SectorExpiry, currEpoch)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to validate dealProposals for activation")

		msm, err := st.mutator(adt.AsStore(rt)).withDealStates(WritePermission).
			withPendingProposals(ReadOnlyPermission).withDealProposals(ReadOnlyPermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")

		for _, dealID := range params.DealIDs {
			// This construction could be replaced with a single "update deal state" state method, possibly batched
			// over all deal ids at once.
			_, found, err := msm.dealStates.Get(dealID)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get state for dealId %d", dealID)
			if found {
				rt.Abortf(exitcode.ErrIllegalArgument, "deal %d already included in another sector", dealID)
			}

			proposal, err := getDealProposal(msm.dealProposals, dealID)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get dealId %d", dealID)

			propc, err := proposal.Cid()
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate proposal CID")

			has, err := msm.pendingDeals.Has(abi.CidKey(propc))
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get pending proposal %v", propc)

			if !has {
				rt.Abortf(exitcode.ErrIllegalState, "tried to activate deal that was not in the pending set (%s)", propc)
			}

			err = msm.dealStates.Set(dealID, &DealState{
				SectorStartEpoch: currEpoch,
				LastUpdatedEpoch: EpochUndefined,
				SlashEpoch:       EpochUndefined,
			})
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set deal state %d", dealID)
		}

		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")
	})

	return nil
}

//type SectorDataSpec struct {
//	DealIDs    []abi.DealID
//	SectorType abi.RegisteredSealProof
//}
type SectorDataSpec = market5.SectorDataSpec

//type ComputeDataCommitmentParams struct {
//	Inputs []*SectorDataSpec
//}
type ComputeDataCommitmentParams = market5.ComputeDataCommitmentParams

//type ComputeDataCommitmentReturn struct {
//	CommDs []cbg.CborCid
//}
type ComputeDataCommitmentReturn = market5.ComputeDataCommitmentReturn

func (a Actor) ComputeDataCommitment(rt Runtime, params *ComputeDataCommitmentParams) *ComputeDataCommitmentReturn {
	rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)

	var st State
	rt.StateReadonly(&st)
	proposals, err := AsDealProposalArray(adt.AsStore(rt), st.Proposals)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deal dealProposals")
	commDs := make([]cbg.CborCid, len(params.Inputs))
	for i, commInput := range params.Inputs {
		pieces := make([]abi.PieceInfo, 0)
		for _, dealID := range commInput.DealIDs {
			deal, err := getDealProposal(proposals, dealID)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get dealId %d", dealID)

			pieces = append(pieces, abi.PieceInfo{
				PieceCID: deal.PieceCID,
				Size:     deal.PieceSize,
			})
		}
		commD, err := rt.ComputeUnsealedSectorCID(commInput.SectorType, pieces)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to compute unsealed sectorCID: %s", err)
		commDs[i] = (cbg.CborCid)(commD)
	}
	return &ComputeDataCommitmentReturn{
		CommDs: commDs,
	}
}

//type OnMinerSectorsTerminateParams struct {
//	Epoch   abi.ChainEpoch
//	DealIDs []abi.DealID
//}
type OnMinerSectorsTerminateParams = market0.OnMinerSectorsTerminateParams

// Terminate a set of deals in response to their containing sector being terminated.
// Slash provider collateral, refund client collateral, and refund partial unpaid escrow
// amount to client.
func (a Actor) OnMinerSectorsTerminate(rt Runtime, params *OnMinerSectorsTerminateParams) *abi.EmptyValue {
	rt.ValidateImmediateCallerType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.Caller()

	var st State
	rt.StateTransaction(&st, func() {
		msm, err := st.mutator(adt.AsStore(rt)).withDealStates(WritePermission).
			withDealProposals(ReadOnlyPermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deal state")

		for _, dealID := range params.DealIDs {
			deal, found, err := msm.dealProposals.Get(dealID)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get deal proposal %v", dealID)
			// The deal may have expired and been deleted before the sector is terminated.
			// Log the dealID for the dealProposal and continue execution for other deals
			if !found {
				rt.Log(rtt.INFO, "couldn't find deal %d", dealID)
				continue
			}
			builtin.RequireState(rt, deal.Provider == minerAddr, "caller %v is not the provider %v of deal %v",
				minerAddr, deal.Provider, dealID)

			// do not slash expired deals
			if deal.EndEpoch <= params.Epoch {
				rt.Log(rtt.INFO, "deal %d expired, not slashing", dealID)
				continue
			}

			state, found, err := msm.dealStates.Get(dealID)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get deal state %v", dealID)
			if !found {
				// A deal with a proposal but no state is not activated, but then it should not be
				// part of a sector that is terminating.
				rt.Abortf(exitcode.ErrIllegalArgument, "no state for deal %v", dealID)
			}

			// if a deal is already slashed, we don't need to do anything here.
			if state.SlashEpoch != EpochUndefined {
				rt.Log(rtt.INFO, "deal %d already slashed", dealID)
				continue
			}

			// mark the deal for slashing here.
			// actual releasing of locked funds for the client and slashing of provider collateral happens in CronTick.
			state.SlashEpoch = params.Epoch

			err = msm.dealStates.Set(dealID, state)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set deal state %v", dealID)
		}

		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")
	})
	return nil
}

func (a Actor) CronTick(rt Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerIs(builtin.CronActorAddr)
	amountSlashed := big.Zero()

	var timedOutVerifiedDeals []*DealProposal

	var st State
	rt.StateTransaction(&st, func() {
		updatesNeeded := make(map[abi.ChainEpoch][]abi.DealID)

		msm, err := st.mutator(adt.AsStore(rt)).withDealStates(WritePermission).
			withLockedTable(WritePermission).withEscrowTable(WritePermission).withDealsByEpoch(WritePermission).
			withDealProposals(WritePermission).withPendingProposals(WritePermission).build()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load state")

		for i := st.LastCron + 1; i <= rt.CurrEpoch(); i++ {
			err = msm.dealsByEpoch.ForEach(i, func(dealID abi.DealID) error {
				deal, err := getDealProposal(msm.dealProposals, dealID)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get dealId %d", dealID)

				dcid, err := deal.Cid()
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to calculate CID for proposal %v", dealID)

				state, found, err := msm.dealStates.Get(dealID)
				builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to get deal state")

				// deal has been published but not activated yet -> terminate it as it has timed out
				if !found {
					// Not yet appeared in proven sector; check for timeout.
					builtin.RequireState(rt, rt.CurrEpoch() >= deal.StartEpoch, "deal %d processed before start epoch %d",
						dealID, deal.StartEpoch)

					slashed := msm.processDealInitTimedOut(rt, deal)
					if !slashed.IsZero() {
						amountSlashed = big.Add(amountSlashed, slashed)
					}
					if deal.VerifiedDeal {
						timedOutVerifiedDeals = append(timedOutVerifiedDeals, deal)
					}

					// Delete the proposal (but not state, which doesn't exist).
					err = msm.dealProposals.Delete(dealID)
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete deal proposal %d", dealID)

					err = msm.pendingDeals.Delete(abi.CidKey(dcid))
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete pending proposal %d (%v)", dealID, dcid)
					return nil
				}

				// if this is the first cron tick for the deal, it should be in the pending state.
				if state.LastUpdatedEpoch == EpochUndefined {
					pdErr := msm.pendingDeals.Delete(abi.CidKey(dcid))
					builtin.RequireNoErr(rt, pdErr, exitcode.ErrIllegalState, "failed to delete pending proposal %v", dcid)
				}

				slashAmount, nextEpoch, removeDeal := msm.updatePendingDealState(rt, state, deal, rt.CurrEpoch())
				builtin.RequireState(rt, slashAmount.GreaterThanEqual(big.Zero()), "computed negative slash amount %v for deal %d", slashAmount, dealID)

				if removeDeal {
					builtin.RequireState(rt, nextEpoch == EpochUndefined, "removed deal %d should have no scheduled epoch (got %d)", dealID, nextEpoch)
					amountSlashed = big.Add(amountSlashed, slashAmount)

					// Delete proposal and state simultaneously.
					err = msm.dealStates.Delete(dealID)
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete deal state %d", dealID)
					err = msm.dealProposals.Delete(dealID)
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete deal proposal %d", dealID)
				} else {
					builtin.RequireState(rt, nextEpoch > rt.CurrEpoch(), "continuing deal %d next epoch %d should be in future", dealID, nextEpoch)
					builtin.RequireState(rt, slashAmount.IsZero(), "continuing deal %d should not be slashed", dealID)

					// Update deal's LastUpdatedEpoch in DealStates
					state.LastUpdatedEpoch = rt.CurrEpoch()
					err = msm.dealStates.Set(dealID, state)
					builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to set deal state")

					updatesNeeded[nextEpoch] = append(updatesNeeded[nextEpoch], dealID)
				}

				return nil
			})
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to iterate deal ops")

			err = msm.dealsByEpoch.RemoveAll(i)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to delete deal ops for epoch %v", i)
		}

		// Iterate changes in sorted order to ensure that loads/stores
		// are deterministic. Otherwise, we could end up charging an
		// inconsistent amount of gas.
		changedEpochs := make([]abi.ChainEpoch, 0, len(updatesNeeded))
		for epoch := range updatesNeeded { //nolint:nomaprange
			changedEpochs = append(changedEpochs, epoch)
		}

		sort.Slice(changedEpochs, func(i, j int) bool { return changedEpochs[i] < changedEpochs[j] })

		for _, epoch := range changedEpochs {
			err = msm.dealsByEpoch.PutMany(epoch, updatesNeeded[epoch])
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to reinsert deal IDs for epoch %v", epoch)
		}

		st.LastCron = rt.CurrEpoch()

		err = msm.commitState()
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush state")
	})

	for _, d := range timedOutVerifiedDeals {
		code := rt.Send(
			builtin.VerifiedRegistryActorAddr,
			builtin.MethodsVerifiedRegistry.RestoreBytes,
			&verifreg.RestoreBytesParams{
				Address:  d.Client,
				DealSize: big.NewIntUnsigned(uint64(d.PieceSize)),
			},
			abi.NewTokenAmount(0),
			&builtin.Discard{},
		)

		if !code.IsSuccess() {
			rt.Log(rtt.ERROR, "failed to send RestoreBytes call to the VerifReg actor for timed-out verified deal, client: %s, dealSize: %v, "+
				"provider: %v, got code %v", d.Client, d.PieceSize, d.Provider, code)
		}
	}

	if !amountSlashed.IsZero() {
		e := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, amountSlashed, &builtin.Discard{})
		builtin.RequireSuccess(rt, e, "expected send to burnt funds actor to succeed")
	}

	return nil
}

func GenRandNextEpoch(startEpoch abi.ChainEpoch, dealID abi.DealID) abi.ChainEpoch {
	offset := abi.ChainEpoch(uint64(dealID) % uint64(DealUpdatesInterval))
	q := builtin.NewQuantSpec(DealUpdatesInterval, 0)
	prevDay := q.QuantizeDown(startEpoch)
	if prevDay+offset >= startEpoch {
		return prevDay + offset
	}
	nextDay := q.QuantizeUp(startEpoch)
	return nextDay + offset
}
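
// As a concrete example (assuming DealUpdatesInterval is 2880 epochs, i.e. one
// day): a deal with StartEpoch 10000 and DealID 7 gets offset 7 % 2880 = 7;
// the quantized-down epoch is 8640, and since 8640 + 7 = 8647 is before the
// start epoch, the deal is first scheduled at the quantized-up epoch
// 11520 + 7 = 11527.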

//
// Exported functions
//

// Validates a collection of deal proposals for activation, and returns their combined weight,
// split into regular deal weight and verified deal weight.
func ValidateDealsForActivation(
	st *State, store adt.Store, dealIDs []abi.DealID, minerAddr addr.Address, sectorExpiry, currEpoch abi.ChainEpoch,
) (big.Int, big.Int, uint64, error) {
	proposals, err := AsDealProposalArray(store, st.Proposals)
	if err != nil {
		return big.Int{}, big.Int{}, 0, xerrors.Errorf("failed to load dealProposals: %w", err)
	}

	return validateAndComputeDealWeight(proposals, dealIDs, minerAddr, sectorExpiry, currEpoch)
}

////////////////////////////////////////////////////////////////////////////////
// Checks
////////////////////////////////////////////////////////////////////////////////

func validateAndComputeDealWeight(proposals *DealArray, dealIDs []abi.DealID, minerAddr addr.Address,
	sectorExpiry abi.ChainEpoch, sectorActivation abi.ChainEpoch) (big.Int, big.Int, uint64, error) {

	seenDealIDs := make(map[abi.DealID]struct{}, len(dealIDs))
	totalDealSpace := uint64(0)
	totalDealSpaceTime := big.Zero()
	totalVerifiedSpaceTime := big.Zero()
	for _, dealID := range dealIDs {
		// Make sure we don't double-count deals.
		if _, seen := seenDealIDs[dealID]; seen {
			return big.Int{}, big.Int{}, 0, exitcode.ErrIllegalArgument.Wrapf("deal ID %d present multiple times", dealID)
		}
		seenDealIDs[dealID] = struct{}{}

		proposal, found, err := proposals.Get(dealID)
		if err != nil {
			return big.Int{}, big.Int{}, 0, xerrors.Errorf("failed to load deal %d: %w", dealID, err)
		}
		if !found {
			return big.Int{}, big.Int{}, 0, exitcode.ErrNotFound.Wrapf("no such deal %d", dealID)
		}
		if err = validateDealCanActivate(proposal, minerAddr, sectorExpiry, sectorActivation); err != nil {
			return big.Int{}, big.Int{}, 0, xerrors.Errorf("cannot activate deal %d: %w", dealID, err)
		}

		// Compute deal weight
		totalDealSpace += uint64(proposal.PieceSize)
		dealSpaceTime := DealWeight(proposal)
		if proposal.VerifiedDeal {
			totalVerifiedSpaceTime = big.Add(totalVerifiedSpaceTime, dealSpaceTime)
		} else {
			totalDealSpaceTime = big.Add(totalDealSpaceTime, dealSpaceTime)
		}
	}
	return totalDealSpaceTime, totalVerifiedSpaceTime, totalDealSpace, nil
}

func validateDealCanActivate(proposal *DealProposal, minerAddr addr.Address, sectorExpiration, sectorActivation abi.ChainEpoch) error {
	if proposal.Provider != minerAddr {
		return exitcode.ErrForbidden.Wrapf("proposal has provider %v, must be %v", proposal.Provider, minerAddr)
	}
	if sectorActivation > proposal.StartEpoch {
		return exitcode.ErrIllegalArgument.Wrapf("proposal start epoch %d has already elapsed at %d", proposal.StartEpoch, sectorActivation)
	}
	if proposal.EndEpoch > sectorExpiration {
		return exitcode.ErrIllegalArgument.Wrapf("proposal expiration %d exceeds sector expiration %d", proposal.EndEpoch, sectorExpiration)
	}
	return nil
}

func validateDeal(rt Runtime, deal ClientDealProposal, networkRawPower, networkQAPower, baselinePower abi.StoragePower) error {
	if err := dealProposalIsInternallyValid(rt, deal); err != nil {
		return xerrors.Errorf("Invalid deal proposal %w", err)
	}

	proposal := deal.Proposal

	if proposal.Label.Length() > DealMaxLabelSize {
		return xerrors.Errorf("deal label can be at most %d bytes, is %d", DealMaxLabelSize, proposal.Label.Length())
	}

	if err := proposal.PieceSize.Validate(); err != nil {
		return xerrors.Errorf("proposal piece size is invalid: %w", err)
	}

	if !proposal.PieceCID.Defined() {
		return xerrors.Errorf("proposal PieceCid undefined")
	}

	if proposal.PieceCID.Prefix() != PieceCIDPrefix {
		return xerrors.Errorf("proposal PieceCID had wrong prefix")
	}

	if proposal.EndEpoch <= proposal.StartEpoch {
		return xerrors.Errorf("proposal end before proposal start")
	}

	if rt.CurrEpoch() > proposal.StartEpoch {
		return xerrors.Errorf("Deal start epoch has already elapsed")
	}

	minDuration, maxDuration := DealDurationBounds(proposal.PieceSize)
	if proposal.Duration() < minDuration || proposal.Duration() > maxDuration {
		return xerrors.Errorf("Deal duration out of bounds")
	}

	minPrice, maxPrice := DealPricePerEpochBounds(proposal.PieceSize, proposal.Duration())
	if proposal.StoragePricePerEpoch.LessThan(minPrice) || proposal.StoragePricePerEpoch.GreaterThan(maxPrice) {
		return xerrors.Errorf("Storage price out of bounds")
	}

	minProviderCollateral, maxProviderCollateral := DealProviderCollateralBounds(proposal.PieceSize, proposal.VerifiedDeal,
		networkRawPower, networkQAPower, baselinePower, rt.TotalFilCircSupply())
	if proposal.ProviderCollateral.LessThan(minProviderCollateral) || proposal.ProviderCollateral.GreaterThan(maxProviderCollateral) {
		return xerrors.Errorf("Provider collateral out of bounds")
	}

	minClientCollateral, maxClientCollateral := DealClientCollateralBounds(proposal.PieceSize, proposal.Duration())
	if proposal.ClientCollateral.LessThan(minClientCollateral) || proposal.ClientCollateral.GreaterThan(maxClientCollateral) {
		return xerrors.Errorf("Client collateral out of bounds")
	}
	return nil
}

//
// Helpers
//

// Resolves a provider or client address to the canonical form against which a balance should be held, and
// the designated recipient address of withdrawals (which is the same, for simple account parties).
func escrowAddress(rt Runtime, address addr.Address) (nominal addr.Address, recipient addr.Address, approved []addr.Address) {
	// Resolve the provided address to the canonical form against which the balance is held.
	nominal, ok := rt.ResolveAddress(address)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "failed to resolve address %v", address)
	}

	codeID, ok := rt.GetActorCodeCID(nominal)
	if !ok {
		rt.Abortf(exitcode.ErrIllegalArgument, "no code for address %v", nominal)
	}

	if codeID.Equals(builtin.StorageMinerActorCodeID) {
		// Storage miner actor entry; implied funds recipient is the associated owner address.
		ownerAddr, workerAddr, _ := builtin.RequestMinerControlAddrs(rt, nominal)
		return nominal, ownerAddr, []addr.Address{ownerAddr, workerAddr}
	}

	return nominal, nominal, []addr.Address{nominal}
}

func getDealProposal(proposals *DealArray, dealID abi.DealID) (*DealProposal, error) {
	proposal, found, err := proposals.Get(dealID)
	if err != nil {
		return nil, xerrors.Errorf("failed to load proposal: %w", err)
	}
	if !found {
		return nil, exitcode.ErrNotFound.Wrapf("no such deal %d", dealID)
	}

	return proposal, nil
}

// Requests the current epoch's baseline power from the reward actor.
func requestCurrentBaselinePower(rt Runtime) abi.StoragePower {
	var ret reward.ThisEpochRewardReturn
	code := rt.Send(builtin.RewardActorAddr, builtin.MethodsReward.ThisEpochReward, nil, big.Zero(), &ret)
	builtin.RequireSuccess(rt, code, "failed to check epoch baseline power")
	return ret.ThisEpochBaselinePower
}

// Requests the current network raw-byte and quality-adjusted power from the power actor.
func requestCurrentNetworkPower(rt Runtime) (rawPower, qaPower abi.StoragePower) {
	var pwr power.CurrentTotalPowerReturn
	code := rt.Send(builtin.StoragePowerActorAddr, builtin.MethodsPower.CurrentTotalPower, nil, big.Zero(), &pwr)
	builtin.RequireSuccess(rt, code, "failed to check current power")
	return pwr.RawBytePower, pwr.QualityAdjPower
}

Storage Market Actor Statuses

StorageMarketActorState implementation
package market

import (
	"bytes"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/exitcode"
	"github.com/ipfs/go-cid"
	xerrors "golang.org/x/xerrors"

	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)

const EpochUndefined = abi.ChainEpoch(-1)

// BalanceLockingReason is the reason behind locking an amount.
type BalanceLockingReason int

const (
	ClientCollateral BalanceLockingReason = iota
	ClientStorageFee
	ProviderCollateral
)

// Bitwidth of AMTs determined empirically from mutation patterns and projections of mainnet data.
const ProposalsAmtBitwidth = 5
const StatesAmtBitwidth = 6

type State struct {
	// Proposals are deals that have been proposed and not yet cleaned up after expiry or termination.
	Proposals cid.Cid // AMT[DealID]DealProposal
	// States contains state for deals that have been activated and not yet cleaned up after expiry or termination.
	// After expiration, the state exists until the proposal is cleaned up too.
	// Invariant: keys(States) ⊆ keys(Proposals).
	States cid.Cid // AMT[DealID]DealState

	// PendingProposals tracks dealProposals that have not yet reached their deal start date.
	// We track them here to ensure that miners can't publish the same deal proposal twice
	PendingProposals cid.Cid // Set[DealCid]

	// Total amount held in escrow, indexed by actor address (including both locked and unlocked amounts).
	EscrowTable cid.Cid // BalanceTable

	// Amount locked, indexed by actor address.
	// Note: the amounts in this table do not affect the overall amount in escrow:
	// only the _portion_ of the total escrow amount that is locked.
	LockedTable cid.Cid // BalanceTable

	NextID abi.DealID

	// Metadata cached for efficient iteration over deals.
	DealOpsByEpoch cid.Cid // SetMultimap, HAMT[epoch]Set
	LastCron       abi.ChainEpoch

	// Total Client Collateral that is locked -> unlocked when deal is terminated
	TotalClientLockedCollateral abi.TokenAmount
	// Total Provider Collateral that is locked -> unlocked when deal is terminated
	TotalProviderLockedCollateral abi.TokenAmount
	// Total storage fee that is locked in escrow -> unlocked when payments are made
	TotalClientStorageFee abi.TokenAmount
}

func ConstructState(store adt.Store) (*State, error) {
	emptyProposalsArrayCid, err := adt.StoreEmptyArray(store, ProposalsAmtBitwidth)
	if err != nil {
		return nil, xerrors.Errorf("failed to create empty array: %w", err)
	}
	emptyStatesArrayCid, err := adt.StoreEmptyArray(store, StatesAmtBitwidth)
	if err != nil {
		return nil, xerrors.Errorf("failed to create empty states array: %w", err)
	}

	emptyPendingProposalsMapCid, err := adt.StoreEmptyMap(store, builtin.DefaultHamtBitwidth)
	if err != nil {
		return nil, xerrors.Errorf("failed to create empty map: %w", err)
	}
	emptyDealOpsHamtCid, err := StoreEmptySetMultimap(store, builtin.DefaultHamtBitwidth)
	if err != nil {
		return nil, xerrors.Errorf("failed to create empty multiset: %w", err)
	}
	emptyBalanceTableCid, err := adt.StoreEmptyMap(store, adt.BalanceTableBitwidth)
	if err != nil {
		return nil, xerrors.Errorf("failed to create empty balance table: %w", err)
	}

	return &State{
		Proposals:        emptyProposalsArrayCid,
		States:           emptyStatesArrayCid,
		PendingProposals: emptyPendingProposalsMapCid,
		EscrowTable:      emptyBalanceTableCid,
		LockedTable:      emptyBalanceTableCid,
		NextID:           abi.DealID(0),
		DealOpsByEpoch:   emptyDealOpsHamtCid,
		LastCron:         abi.ChainEpoch(-1),

		TotalClientLockedCollateral:   abi.NewTokenAmount(0),
		TotalProviderLockedCollateral: abi.NewTokenAmount(0),
		TotalClientStorageFee:         abi.NewTokenAmount(0),
	}, nil
}

////////////////////////////////////////////////////////////////////////////////
// Deal state operations
////////////////////////////////////////////////////////////////////////////////

func (m *marketStateMutation) updatePendingDealState(rt Runtime, state *DealState, deal *DealProposal, epoch abi.ChainEpoch) (amountSlashed abi.TokenAmount, nextEpoch abi.ChainEpoch, removeDeal bool) {
	amountSlashed = abi.NewTokenAmount(0)

	everUpdated := state.LastUpdatedEpoch != EpochUndefined
	everSlashed := state.SlashEpoch != EpochUndefined

	builtin.RequireState(rt, !everUpdated || (state.LastUpdatedEpoch <= epoch), "deal updated at future epoch %d", state.LastUpdatedEpoch)

	// This should never happen: it would mean the first scheduled callback fired before the deal's start epoch.
	if deal.StartEpoch > epoch {
		return amountSlashed, EpochUndefined, false
	}

	paymentEndEpoch := deal.EndEpoch
	if everSlashed {
		builtin.RequireState(rt, epoch >= state.SlashEpoch, "current epoch less than deal slash epoch %d", state.SlashEpoch)
		builtin.RequireState(rt, state.SlashEpoch <= deal.EndEpoch, "deal slash epoch %d after deal end %d", state.SlashEpoch, deal.EndEpoch)
		paymentEndEpoch = state.SlashEpoch
	} else if epoch < paymentEndEpoch {
		paymentEndEpoch = epoch
	}

	paymentStartEpoch := deal.StartEpoch
	if everUpdated && state.LastUpdatedEpoch > paymentStartEpoch {
		paymentStartEpoch = state.LastUpdatedEpoch
	}

	numEpochsElapsed := paymentEndEpoch - paymentStartEpoch

	{
		// Process deal payment for the elapsed epochs.
		totalPayment := big.Mul(big.NewInt(int64(numEpochsElapsed)), deal.StoragePricePerEpoch)

		// the transfer amount can be less than or equal to zero if a deal is slashed before or at the deal's start epoch.
		if totalPayment.GreaterThan(big.Zero()) {
			err := m.transferBalance(deal.Client, deal.Provider, totalPayment)
			builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to transfer %v from %v to %v",
				totalPayment, deal.Client, deal.Provider)
		}
	}

	if everSlashed {
		// unlock client collateral and locked storage fee
		paymentRemaining, err := dealGetPaymentRemaining(deal, state.SlashEpoch)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to compute remaining payment")

		// unlock remaining storage fee
		err = m.unlockBalance(deal.Client, paymentRemaining, ClientStorageFee)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock remaining client storage fee")

		// unlock client collateral
		err = m.unlockBalance(deal.Client, deal.ClientCollateral, ClientCollateral)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to unlock client collateral")

		// slash provider collateral
		amountSlashed = deal.ProviderCollateral
		err = m.slashBalance(deal.Provider, amountSlashed, ProviderCollateral)
		builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "slashing balance")
		return amountSlashed, EpochUndefined, true
	}

	if epoch >= deal.EndEpoch {
		m.processDealExpired(rt, deal, state)
		return amountSlashed, EpochUndefined, true
	}

	// We're explicitly not inspecting the end epoch and may process a deal's expiration late, in order to prevent an outsider
	// from loading a cron tick by activating too many deals with the same end epoch.
	nextEpoch = epoch + DealUpdatesInterval

	return amountSlashed, nextEpoch, false
}
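
// For example: a deal with StartEpoch 1000, EndEpoch 20000 and
// StoragePricePerEpoch 10 that has never been updated or slashed, first
// processed at epoch 3880, pays for the window [1000, 3880): the client
// transfers (3880-1000)*10 = 28,800 to the provider, LastUpdatedEpoch becomes
// 3880, and the deal is rescheduled for epoch 3880 + DealUpdatesInterval.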

// Deal start deadline elapsed without appearing in a proven sector.
// Slash a portion of provider's collateral, and unlock remaining collaterals
// for both provider and client.
func (m *marketStateMutation) processDealInitTimedOut(rt Runtime, deal *DealProposal) abi.TokenAmount {
	if err := m.unlockBalance(deal.Client, deal.TotalStorageFee(), ClientStorageFee); err != nil {
		rt.Abortf(exitcode.ErrIllegalState, "failure unlocking client storage fee: %s", err)
	}
	if err := m.unlockBalance(deal.Client, deal.ClientCollateral, ClientCollateral); err != nil {
		rt.Abortf(exitcode.ErrIllegalState, "failure unlocking client collateral: %s", err)
	}

	amountSlashed := CollateralPenaltyForDealActivationMissed(deal.ProviderCollateral)
	amountRemaining := big.Sub(deal.ProviderBalanceRequirement(), amountSlashed)

	if err := m.slashBalance(deal.Provider, amountSlashed, ProviderCollateral); err != nil {
		rt.Abortf(exitcode.ErrIllegalState, "failed to slash balance: %s", err)
	}

	if err := m.unlockBalance(deal.Provider, amountRemaining, ProviderCollateral); err != nil {
		rt.Abortf(exitcode.ErrIllegalState, "failed to unlock deal provider balance: %s", err)
	}

	return amountSlashed
}

// Normal expiration. Unlock collaterals for both provider and client.
func (m *marketStateMutation) processDealExpired(rt Runtime, deal *DealProposal, state *DealState) {
	builtin.RequireState(rt, state.SectorStartEpoch != EpochUndefined, "sector start epoch undefined")

	// Note: payment has already been completed at this point (_rtProcessDealPaymentEpochsElapsed)
	err := m.unlockBalance(deal.Provider, deal.ProviderCollateral, ProviderCollateral)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed unlocking deal provider balance")

	err = m.unlockBalance(deal.Client, deal.ClientCollateral, ClientCollateral)
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed unlocking deal client balance")
}

func (m *marketStateMutation) generateStorageDealID() abi.DealID {
	ret := m.nextDealId
	m.nextDealId = m.nextDealId + abi.DealID(1)
	return ret
}

////////////////////////////////////////////////////////////////////////////////
// State utility functions
////////////////////////////////////////////////////////////////////////////////

func dealProposalIsInternallyValid(rt Runtime, proposal ClientDealProposal) error {
	// Note: we do not verify the provider signature here, since this is implicit in the
	// authenticity of the on-chain message publishing the deal.
	buf := bytes.Buffer{}
	err := proposal.Proposal.MarshalCBOR(&buf)
	if err != nil {
		return xerrors.Errorf("proposal signature verification failed to marshal proposal: %w", err)
	}
	err = rt.VerifySignature(proposal.ClientSignature, proposal.Proposal.Client, buf.Bytes())
	if err != nil {
		return xerrors.Errorf("signature proposal invalid: %w", err)
	}
	return nil
}

func dealGetPaymentRemaining(deal *DealProposal, slashEpoch abi.ChainEpoch) (abi.TokenAmount, error) {
	if slashEpoch > deal.EndEpoch {
		return big.Zero(), xerrors.Errorf("deal slash epoch %d after end epoch %d", slashEpoch, deal.EndEpoch)
	}

	// Payments are always for start -> end epoch irrespective of when the deal is slashed.
	if slashEpoch < deal.StartEpoch {
		slashEpoch = deal.StartEpoch
	}

	durationRemaining := deal.EndEpoch - slashEpoch
	if durationRemaining < 0 {
		return big.Zero(), xerrors.Errorf("deal remaining duration negative: %d", durationRemaining)
	}

	return big.Mul(big.NewInt(int64(durationRemaining)), deal.StoragePricePerEpoch), nil
}

// MarketStateMutationPermission is the mutation permission on a state field
type MarketStateMutationPermission int

const (
	// Invalid means NO permission
	Invalid MarketStateMutationPermission = iota
	// ReadOnlyPermission allows reading but not mutating the field
	ReadOnlyPermission
	// WritePermission allows mutating the field
	WritePermission
)

type marketStateMutation struct {
	st    *State
	store adt.Store

	proposalPermit MarketStateMutationPermission
	dealProposals  *DealArray

	statePermit MarketStateMutationPermission
	dealStates  *DealMetaArray

	escrowPermit MarketStateMutationPermission
	escrowTable  *adt.BalanceTable

	pendingPermit MarketStateMutationPermission
	pendingDeals  *adt.Set

	dpePermit    MarketStateMutationPermission
	dealsByEpoch *SetMultimap

	lockedPermit                  MarketStateMutationPermission
	lockedTable                   *adt.BalanceTable
	totalClientLockedCollateral   abi.TokenAmount
	totalProviderLockedCollateral abi.TokenAmount
	totalClientStorageFee         abi.TokenAmount

	nextDealId abi.DealID
}

func (s *State) mutator(store adt.Store) *marketStateMutation {
	return &marketStateMutation{st: s, store: store}
}

func (m *marketStateMutation) build() (*marketStateMutation, error) {
	if m.proposalPermit != Invalid {
		proposals, err := AsDealProposalArray(m.store, m.st.Proposals)
		if err != nil {
			return nil, xerrors.Errorf("failed to load deal proposals: %w", err)
		}
		m.dealProposals = proposals
	}

	if m.statePermit != Invalid {
		states, err := AsDealStateArray(m.store, m.st.States)
		if err != nil {
			return nil, xerrors.Errorf("failed to load deal state: %w", err)
		}
		m.dealStates = states
	}

	if m.lockedPermit != Invalid {
		lt, err := adt.AsBalanceTable(m.store, m.st.LockedTable)
		if err != nil {
			return nil, xerrors.Errorf("failed to load locked table: %w", err)
		}
		m.lockedTable = lt
		m.totalClientLockedCollateral = m.st.TotalClientLockedCollateral.Copy()
		m.totalClientStorageFee = m.st.TotalClientStorageFee.Copy()
		m.totalProviderLockedCollateral = m.st.TotalProviderLockedCollateral.Copy()
	}

	if m.escrowPermit != Invalid {
		et, err := adt.AsBalanceTable(m.store, m.st.EscrowTable)
		if err != nil {
			return nil, xerrors.Errorf("failed to load escrow table: %w", err)
		}
		m.escrowTable = et
	}

	if m.pendingPermit != Invalid {
		pending, err := adt.AsSet(m.store, m.st.PendingProposals, builtin.DefaultHamtBitwidth)
		if err != nil {
			return nil, xerrors.Errorf("failed to load pending proposals: %w", err)
		}
		m.pendingDeals = pending
	}

	if m.dpePermit != Invalid {
		dbe, err := AsSetMultimap(m.store, m.st.DealOpsByEpoch, builtin.DefaultHamtBitwidth, builtin.DefaultHamtBitwidth)
		if err != nil {
			return nil, xerrors.Errorf("failed to load deals by epoch: %w", err)
		}
		m.dealsByEpoch = dbe
	}

	m.nextDealId = m.st.NextID

	return m, nil
}

func (m *marketStateMutation) withDealProposals(permit MarketStateMutationPermission) *marketStateMutation {
	m.proposalPermit = permit
	return m
}

func (m *marketStateMutation) withDealStates(permit MarketStateMutationPermission) *marketStateMutation {
	m.statePermit = permit
	return m
}

func (m *marketStateMutation) withEscrowTable(permit MarketStateMutationPermission) *marketStateMutation {
	m.escrowPermit = permit
	return m
}

func (m *marketStateMutation) withLockedTable(permit MarketStateMutationPermission) *marketStateMutation {
	m.lockedPermit = permit
	return m
}

func (m *marketStateMutation) withPendingProposals(permit MarketStateMutationPermission) *marketStateMutation {
	m.pendingPermit = permit
	return m
}

func (m *marketStateMutation) withDealsByEpoch(permit MarketStateMutationPermission) *marketStateMutation {
	m.dpePermit = permit
	return m
}

func (m *marketStateMutation) commitState() error {
	var err error
	if m.proposalPermit == WritePermission {
		if m.st.Proposals, err = m.dealProposals.Root(); err != nil {
			return xerrors.Errorf("failed to flush deal dealProposals: %w", err)
		}
	}

	if m.statePermit == WritePermission {
		if m.st.States, err = m.dealStates.Root(); err != nil {
			return xerrors.Errorf("failed to flush deal states: %w", err)
		}
	}

	if m.lockedPermit == WritePermission {
		if m.st.LockedTable, err = m.lockedTable.Root(); err != nil {
			return xerrors.Errorf("failed to flush locked table: %w", err)
		}
		m.st.TotalClientLockedCollateral = m.totalClientLockedCollateral.Copy()
		m.st.TotalProviderLockedCollateral = m.totalProviderLockedCollateral.Copy()
		m.st.TotalClientStorageFee = m.totalClientStorageFee.Copy()
	}

	if m.escrowPermit == WritePermission {
		if m.st.EscrowTable, err = m.escrowTable.Root(); err != nil {
			return xerrors.Errorf("failed to flush escrow table: %w", err)
		}
	}

	if m.pendingPermit == WritePermission {
		if m.st.PendingProposals, err = m.pendingDeals.Root(); err != nil {
			return xerrors.Errorf("failed to flush pending deals: %w", err)
		}
	}

	if m.dpePermit == WritePermission {
		if m.st.DealOpsByEpoch, err = m.dealsByEpoch.Root(); err != nil {
			return xerrors.Errorf("failed to flush deals by epoch: %w", err)
		}
	}

	m.st.NextID = m.nextDealId
	return nil
}
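
The builder above lets each actor method load exactly the state fields it intends to touch and flush them back in a single step. The fragment below is an illustrative sketch of that pattern, assuming the market actor package context shown above; commitPendingChanges is a hypothetical helper name, not an actual actor method.

// commitPendingChanges is a hypothetical helper illustrating how the mutation
// builder is used: request permissions for the fields to be modified, build,
// mutate through the returned handle, then flush everything with commitState.
func commitPendingChanges(rt Runtime, st *State) {
	msm, err := st.mutator(adt.AsStore(rt)).
		withDealStates(WritePermission).
		withLockedTable(WritePermission).
		withEscrowTable(WritePermission).
		withDealProposals(ReadOnlyPermission).
		build()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load deal state")

	// ... read proposals and mutate deal states / balance tables via msm ...

	err = msm.commitState()
	builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to flush deal state")
}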

Storage Market Actor Balance states and mutations

package market

import (
	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/exitcode"
	"golang.org/x/xerrors"
)

func (m *marketStateMutation) lockClientAndProviderBalances(proposal *DealProposal) error {
	if err := m.maybeLockBalance(proposal.Client, proposal.ClientBalanceRequirement()); err != nil {
		return xerrors.Errorf("failed to lock client funds: %w", err)
	}
	if err := m.maybeLockBalance(proposal.Provider, proposal.ProviderCollateral); err != nil {
		return xerrors.Errorf("failed to lock provider funds: %w", err)
	}

	m.totalClientLockedCollateral = big.Add(m.totalClientLockedCollateral, proposal.ClientCollateral)
	m.totalClientStorageFee = big.Add(m.totalClientStorageFee, proposal.TotalStorageFee())
	m.totalProviderLockedCollateral = big.Add(m.totalProviderLockedCollateral, proposal.ProviderCollateral)
	return nil
}

func (m *marketStateMutation) unlockBalance(addr addr.Address, amount abi.TokenAmount, lockReason BalanceLockingReason) error {
	if amount.LessThan(big.Zero()) {
		return xerrors.Errorf("unlock negative amount %v", amount)
	}

	err := m.lockedTable.MustSubtract(addr, amount)
	if err != nil {
		return xerrors.Errorf("subtracting from locked balance: %w", err)
	}

	switch lockReason {
	case ClientCollateral:
		m.totalClientLockedCollateral = big.Sub(m.totalClientLockedCollateral, amount)
	case ClientStorageFee:
		m.totalClientStorageFee = big.Sub(m.totalClientStorageFee, amount)
	case ProviderCollateral:
		m.totalProviderLockedCollateral = big.Sub(m.totalProviderLockedCollateral, amount)
	}

	return nil
}

// move funds from locked in client to available in provider
func (m *marketStateMutation) transferBalance(fromAddr addr.Address, toAddr addr.Address, amount abi.TokenAmount) error {
	if amount.LessThan(big.Zero()) {
		return xerrors.Errorf("transfer negative amount %v", amount)
	}
	if err := m.escrowTable.MustSubtract(fromAddr, amount); err != nil {
		return xerrors.Errorf("subtract from escrow: %w", err)
	}
	if err := m.unlockBalance(fromAddr, amount, ClientStorageFee); err != nil {
		return xerrors.Errorf("subtract from locked: %w", err)
	}
	if err := m.escrowTable.Add(toAddr, amount); err != nil {
		return xerrors.Errorf("add to escrow: %w", err)
	}
	return nil
}

func (m *marketStateMutation) slashBalance(addr addr.Address, amount abi.TokenAmount, reason BalanceLockingReason) error {
	if amount.LessThan(big.Zero()) {
		return xerrors.Errorf("negative amount to slash: %v", amount)
	}

	if err := m.escrowTable.MustSubtract(addr, amount); err != nil {
		return xerrors.Errorf("subtract from escrow: %v", err)
	}

	return m.unlockBalance(addr, amount, reason)
}

func (m *marketStateMutation) maybeLockBalance(addr addr.Address, amount abi.TokenAmount) error {
	if amount.LessThan(big.Zero()) {
		return xerrors.Errorf("cannot lock negative amount %v", amount)
	}

	prevLocked, err := m.lockedTable.Get(addr)
	if err != nil {
		return xerrors.Errorf("failed to get locked balance: %w", err)
	}

	escrowBalance, err := m.escrowTable.Get(addr)
	if err != nil {
		return xerrors.Errorf("failed to get escrow balance: %w", err)
	}

	if big.Add(prevLocked, amount).GreaterThan(escrowBalance) {
		return exitcode.ErrInsufficientFunds.Wrapf("insufficient balance for addr %s: escrow balance %s < locked %s + required %s",
			addr, escrowBalance, prevLocked, amount)
	}

	if err := m.lockedTable.Add(addr, amount); err != nil {
		return xerrors.Errorf("failed to add locked balance: %w", err)
	}
	return nil
}

// Return true when the funds in escrow for the input address can cover an additional lockup of amountToLock
func (m *marketStateMutation) balanceCovered(addr addr.Address, amountToLock abi.TokenAmount) (bool, error) {
	prevLocked, err := m.lockedTable.Get(addr)
	if err != nil {
		return false, xerrors.Errorf("failed to get locked balance: %w", err)
	}
	escrowBalance, err := m.escrowTable.Get(addr)
	if err != nil {
		return false, xerrors.Errorf("failed to get escrow balance: %w", err)
	}
	return big.Add(prevLocked, amountToLock).LessThanEqual(escrowBalance), nil
}
Storage Deal Collateral

Apart from the Initial Pledge Collateral and Block Reward Collateral discussed earlier, the third form of collateral, Storage Deal Collateral, is provided by the storage provider to collateralize deals and is held in the StorageMarketActor.

The protocol requires a minimum amount of collateral to provide a baseline level of guarantee; the actual amount is agreed upon by the storage provider and client off-chain. However, miners can offer a higher deal collateral to signal a higher level of service and reliability to potential clients. Given the increased stakes, clients may associate additional provider deal collateral beyond the minimum with an increased likelihood that their data will be reliably stored.

Provider deal collateral is only slashed when a sector is terminated before the deal expires. If a miner enters Temporary Fault for a sector and later recovers from it, no deal collateral will be slashed.

This collateral is returned to the storage provider when all deals in the sector successfully conclude. Upon graceful deal expiration, storage providers must wait for finality number of epochs (as defined in Finality) before being able to withdraw their StorageDealCollateral from the StorageMarketActor.

$$MinimumProviderDealCollateral = 1\% \times FILCirculatingSupply \times \frac{DealRawByte}{max(NetworkBaseline, NetworkRawBytePower)}$$
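
The formula can be evaluated directly with the big-integer arithmetic used throughout the actors. The program below is an illustrative sketch, not the actors' policy function: the 1% factor is applied as an integer division by 100, and the input values in main are arbitrary examples.

package main

import (
	"fmt"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
)

// minProviderDealCollateral evaluates the formula above in integer math:
// 1% of the circulating supply, scaled by the deal's share of the larger of
// the network baseline and the network raw-byte power.
func minProviderDealCollateral(
	circulatingSupply abi.TokenAmount,
	dealSizeBytes uint64,
	networkBaseline abi.StoragePower,
	networkRawBytePower abi.StoragePower,
) abi.TokenAmount {
	powerDenominator := big.Max(networkBaseline, networkRawBytePower)

	collateral := big.Mul(circulatingSupply, big.NewIntUnsigned(dealSizeBytes))
	collateral = big.Div(collateral, big.NewInt(100)) // the 1% factor
	collateral = big.Div(collateral, powerDenominator)
	return collateral
}

func main() {
	// Illustrative values only (not real network parameters).
	supply := abi.NewTokenAmount(400_000_000)       // circulating supply
	baseline := abi.NewStoragePower(10_000_000_000) // network baseline
	rawPower := abi.NewStoragePower(8_000_000_000)  // network raw-byte power
	fmt.Println(minProviderDealCollateral(supply, 32<<30, baseline, rawPower))
}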

Storage Deal Flow

Deal Flow Sequence Diagram

Add Storage Deal and Power
  1. StorageClient and StorageProvider call StorageMarketActor.AddBalance to deposit funds into Storage Market.
    • StorageClient and StorageProvider can call WithdrawBalance before any deal is made.
  2. StorageClient and StorageProvider negotiate a deal off chain. StorageClient sends a StorageDealProposal to a StorageProvider.
    • StorageProvider verifies the StorageDeal by checking:
      - the address and signature of the StorageClient,
      - the proposal’s StartEpoch is after the current Epoch,
      - (tentative) the StorageClient did not call withdraw in the last X epochs (WithdrawBalance should take at least X epochs); X is currently set to 0, but the setting will be re-considered in the near future,
      - both StorageProvider and StorageClient have sufficient available balances in StorageMarketActor.
  3. StorageProvider signs the StorageDealProposal by constructing an on-chain message.
    • StorageProvider calls PublishStorageDeals in StorageMarketActor to publish this on-chain message which will generate a DealID for each StorageDeal and store a mapping from DealID to StorageDeal. However, the deals are not active at this point.
      • As a backup, StorageClient may call PublishStorageDeals with the StorageDeal, to activate the deal if they can obtain the signed on-chain message from StorageProvider.
      • It is possible for either StorageProvider or StorageClient to try to enter into two deals simultaneously with funds available only for one. Only the first deal to commit to the chain will clear; the second will fail with exit code exitcode.ErrInsufficientFunds.
    • StorageProvider calls HandleStorageDeal in StorageMiningSubsystem which will then add the StorageDeal into a Sector.
Sealing sectors
  1. Once a miner finishes packing a Sector, it generates a SectorPreCommitInfo and calls PreCommitSector or PreCommitSectorBatch with a PreCommitDeposit. It must call ProveCommitSector or ProveCommitAggregate with SectorProveCommitInfo within some bound to recover the deposit. Initial Pledge will then be required at the time of ProveCommit. Initial Pledge is usually higher than PreCommitDeposit. The recovered PreCommitDeposit will count towards Initial Pledge, so miners only need to top up additional funds at ProveCommit (see the sketch below). Excess PreCommitDeposit, when it is greater than Initial Pledge, will be returned to the miner. An expired PreCommit message will result in PreCommitDeposit being burned. All Sectors have an explicit expiration epoch declared during PreCommit. For sectors with deals, all deals must expire before sector expiration. The Miner gains power for this particular sector upon successful ProveCommit. For more details on Sectors and the different types of deals that can be included in a Sector refer to the Sector section.
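
To make the deposit/pledge accounting concrete, the sketch below computes the additional funds a miner would need to supply at ProveCommit, given that the recovered PreCommitDeposit is credited toward the Initial Pledge and any excess is refunded. The function name and values are hypothetical; this is not the miner actor's API.

package main

import (
	"fmt"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
)

// proveCommitTopUp returns how much the miner must add on top of the
// recovered PreCommitDeposit to satisfy the Initial Pledge requirement.
// Any excess deposit (deposit > pledge) is returned to the miner, so the
// top-up is never negative.
func proveCommitTopUp(initialPledge, preCommitDeposit abi.TokenAmount) abi.TokenAmount {
	topUp := big.Sub(initialPledge, preCommitDeposit)
	return big.Max(topUp, big.Zero())
}

func main() {
	pledge := abi.NewTokenAmount(1000) // illustrative units
	deposit := abi.NewTokenAmount(400)
	fmt.Println("additional funds at ProveCommit:", proveCommitTopUp(pledge, deposit))
}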
Prove Storage
  1. Miners have to prove that they hold unique copies of Sectors by submitting proofs according to the Proof of SpaceTime algorithm. Miners have to prove all their Sectors in regular time intervals in order for the system to guarantee that they indeed store the data they committed to store in the deal phase.
Declare and Recover Faults
  1. Miners can call DeclareFaults to mark certain Sectors as faulty to avoid paying Sector Fault Detection Fee. Power associated with the sector will be removed at fault declaration.
  2. Miners can call DeclareFaultsRecovered to mark previously faulty sector as recovered. Power will be restored when recovered sectors pass WindowPoSt checks successfully.
  3. A sector pays a Sector Fault Fee for every proving period during which it is marked as faulty.
Skipped Faults
  1. After a WindowPoSt deadline opens, a miner can mark one of their sectors as faulty so that it is exempted from WindowPoSt checks; these are called Skipped Faults. This avoids paying a Sector Fault Detection Fee on the whole partition.
Detected Faults
  1. If a partition misses a WindowPoSt submission deadline, all previously non-faulty sectors in the partition are detected as faulty and a Fault Detection Fee is charged.
Sector Expiration
  1. A sector expires when its expiration epoch is reached; the sector expiration epoch must be greater than the expiration epoch of all the deals it contains.
Sector Termination
  1. Termination of a sector can be triggered in two ways: either when the sector remains faulty for 42 consecutive days, or when the miner initiates a termination by calling TerminateSectors. In both cases a TerminationFee is charged, which is in principle equivalent to how much the sector has earned so far. Miners are also penalized for the DealCollateral that the sector contains, and the remaining DealPayment is returned to clients.
Deal Payment and slashing
  1. Deal payment and slashing are evaluated lazily through updatePendingDealState called at CronTick.

Storage Deal States

All on-chain economic activities in Filecoin start with the storage deal. This section aims to explain different states of a storage deal and their relationship with other concepts in the protocol such as Power, Payment, and Collaterals.

A deal has the following states:

  • Unpublished: the deal has yet to be posted on chain.
  • Published: the deal has been published and accepted by the chain but is not yet active as the sector containing the deal has not been proven.
  • Active: the deal has been proven and not yet expired.
  • Deleted: the deal has expired or the sector containing the deal has been terminated because of faults.

Note that the Unpublished and Deleted states are not tracked on chain. To reduce the on-chain footprint, an OnChainDeal struct is created when a deal is published; it keeps track of a LastPaymentEpoch which defaults to -1 while a deal is in the Published state. A deal transitions into the Active state when LastPaymentEpoch is positive.
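
A minimal sketch of that rule, with a hypothetical OnChainDealState type standing in for the on-chain record described above:

package dealstates

// OnChainDealState is a hypothetical stand-in for the on-chain deal record;
// only the field relevant to the Published -> Active distinction is shown.
type OnChainDealState struct {
	LastPaymentEpoch int64 // defaults to -1 while the deal is merely Published
}

// dealIsActive reflects the rule above: a deal becomes Active once
// LastPaymentEpoch has been set to a positive epoch.
func dealIsActive(s OnChainDealState) bool {
	return s.LastPaymentEpoch > 0
}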

The following describes how a deal transitions between its different states. These states in the list below are on-chain states understood by the actor/VM logic.

  • Unpublished -> Published: this is triggered by StorageMarketActor.PublishStorageDeals which validates new storage deals, locks necessary funds, generates deal IDs, and registers the storage deals in StorageMarketActor.
  • Published -> Deleted: this is triggered by StorageMinerActor.ProveCommitSector or StorageMinerActor.ProveCommitAggregate during InteractivePoRep when the elapsed number of epochs between PreCommit and ProveCommit messages exceeds MAX_PROVE_COMMIT_SECTOR_EPOCH. The ProveCommit message will also trigger garbage collection on the list of published storage deals.
  • Published -> Active: this is triggered by ActivateStorageDeals after successful StorageMinerActor.ProveCommitSector or StorageMinerActor.ProveCommitAggregate. It is okay for the StorageDeal to have already started (i.e. for StartEpoch to have passed) at this point but it must not have expired.
  • Active -> Deleted: this can happen under the following conditions:
    • The deal itself has expired. This is triggered by StorageMinerActorCode._submitPowerReport which is called whenever a PoSt is submitted. Power associated with the deal will be lost, collaterals returned, and all remaining storage fees unlocked (allowing miners to call WithdrawBalance successfully).
    • The sector containing the deal has expired. This is triggered by StorageMinerActorCode._submitPowerReport which is called whenever a PoSt is submitted. Power associated with the deals in the sector will be lost, collaterals returned, and all remaining storage fees unlocked.
    • The sector containing the active deal has been terminated. This is triggered by StorageMinerActor._submitFaultReport for TerminatedFaults. No storage deal collateral will be slashed on fault declaration or detection, only on termination. A terminated fault is triggered when a sector is in the Failing state for MAX_CONSECUTIVE_FAULTS consecutive proving periods.

Given the onchain deal states and their transitions discussed above, below is a description of the relationships between onchain deal states and other economic states and activities in the protocol.

  • Power: only payload data in an Active storage deal counts towards power.
  • Deal Payment: happens on _onSuccessfulPoSt and at deal/sector expiration through _submitPowerReport, paying out StoragePricePerEpoch for each epoch since the last PoSt.
  • Deal Collateral: no storage deal collateral will be slashed for NewDeclaredFaults and NewDetectedFaults but instead some pledge collateral will be slashed given these faults’ impact on consensus power. In the event of NewTerminatedFaults, all storage deal collateral and some pledge collateral will be slashed. Provider and client storage deal collaterals will be returned when a deal or a sector has expired. If a sector recovers from Failing within the MAX_CONSECUTIVE_FAULTS threshold, deals in that sector are still considered active. However, miners may need to top up pledge collateral when they try to RecoverFaults given the earlier slashing.

Deal States Sequence Diagram

Faults

There are two main categories of faults in the Filecoin network:

  1. Storage or Sector Faults that relate to a failure to store the files previously agreed in a deal, due to a hardware error or malicious behaviour, and
  2. Consensus Faults that relate to a miner trying to deviate from the protocol in order to gain more power than their storage deserves.

Please refer to the corresponding sections for more details.

Both Storage and Consensus Faults come with penalties that slash the miner’s collateral. See more details on the different types of collateral in the Miner Collaterals section.

Retrieval Market in Filecoin

Components

The retrieval market refers to the process of negotiating deals for a provider to serve stored data to a client. It should be highlighted that the negotiation process for a retrieval deal happens primarily off-chain. Only some parts of it (mostly relating to redeeming vouchers from payment channels) involve interaction with the blockchain.

The main components are as follows:

  • A payment channel actor
  • A protocol for making queries
  • A Data Transfer subsystem and protocol used to query retrieval miners and initiate retrieval deals
  • A chain-based content routing interface
  • A client module to query retrieval miners and initiate deals for retrieval
  • A provider module to respond to queries and deal proposals

The retrieval market operates by piggybacking on the Data Transfer system and Graphsync to handle transfer and verification, to support arbitrary selectors, and to reduce round trips. The retrieval market can support sending arbitrary payload CIDs and selectors within a piece.

The Data Transfer System is augmented accordingly to support pausing/resuming and sending intermediate vouchers to facilitate this.

Deal Flow in the Retrieval Market

Retrieval Flow

The Filecoin Retrieval Market protocol for proposing and accepting a deal works as follows (a simplified client-side sketch follows the list):

  • The client finds a provider of a given piece with FindProviders().
  • The client queries a provider to see if it meets its retrieval criteria (via Query Protocol)
  • The client schedules a Data Transfer Pull Request passing the RetrievalDealProposal as a voucher.
  • The provider validates the proposal and rejects it if it is invalid.
  • If the proposal is valid, the provider responds with an accept message and begins monitoring the data transfer process.
  • The client creates a payment channel as necessary and a “lane” and ensures there are enough funds in the channel.
  • The provider unseals the sector as necessary.
  • The provider monitors data transfer as it sends blocks over the protocol, until it requires payment.
  • When the provider requires payment, it pauses the data transfer and sends a request for payment as an intermediate voucher.
  • The client receives the request for payment.
  • The client creates and stores a payment voucher off-chain.
  • The client responds to the provider with a reference to the payment voucher, sent as an intermediate voucher (i.e., acknowledging receipt of a part of the data and channel or lane value).
  • The provider validates the voucher sent by the client and saves it to be redeemed on-chain later
  • The provider resumes sending data and requesting intermediate payments.
  • The process continues until the end of the data transfer.
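
The client-side portion of this flow maps naturally onto the RetrievalClient interface defined later in this section. The sketch below is a simplified walk-through assuming that interface: payment-channel setup, event subscription, and retries are elided, and retrieveSketch is a hypothetical helper.

package retrievalexample

import (
	"context"
	"fmt"

	"github.com/filecoin-project/go-address"
	"github.com/ipfs/go-cid"

	rm "github.com/filecoin-project/go-fil-markets/retrievalmarket"
)

// retrieveSketch walks through the flow above: discover providers, query one
// for its terms, then propose a deal and start the pull data transfer.
func retrieveSketch(ctx context.Context, client rm.RetrievalClient, payloadCID cid.Cid,
	clientWallet, minerWallet address.Address) error {

	// 1. Find providers that may be able to serve the payload.
	peers := client.FindProviders(payloadCID)
	if len(peers) == 0 {
		return fmt.Errorf("no providers found for %s", payloadCID)
	}
	p := peers[0]

	// 2. Query the chosen provider for its retrieval terms.
	resp, err := client.Query(ctx, p, payloadCID, rm.QueryParams{})
	if err != nil {
		return err
	}
	if resp.Status != rm.QueryResponseAvailable {
		return fmt.Errorf("provider cannot serve payload: %s", resp.Message)
	}

	// 3. Propose the deal and start the data transfer, funding it with the
	//    provider's quoted total retrieval price.
	params := rm.NewParamsV0(resp.MinPricePerByte, resp.MaxPaymentInterval, resp.MaxPaymentIntervalIncrease)
	_, err = client.Retrieve(ctx, client.NextID(), payloadCID, params,
		resp.PieceRetrievalPrice(), p, clientWallet, minerWallet)
	return err
}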

Some extra notes worth making with regard to the above process are as follows:

  • The payment channel is created by the client.
  • The payment channel is created when the provider accepts the deal, unless an open payment channel already exists between the given client and provider.
  • The vouchers are also created by the client and (a reference/identifier to these vouchers is) sent to the provider.
  • The payment indicated in the voucher is not taken out of the payment channel funds upon creation and exchange of vouchers between the client and the provider.
  • In order for money to be transferred to the provider’s payment channel side, the provider has to redeem the voucher.
  • In order for money to be taken out of the payment channel, the provider has to submit the voucher on-chain and Collect the funds.
  • Both redeeming and collecting vouchers/funds can be done at any time during the data transfer, but redeeming vouchers and collecting funds involves the blockchain, which further means that it incurs gas cost.
  • Once the data transfer is complete, the client or provider may Settle the channel. There is then a 12hr period within which the provider has to submit the redeemed vouchers on-chain in order to collect the funds. Once the 12hr period is complete, the client may collect any unclaimed funds from the channel, and the provider loses the funds for vouchers they did not submit.
  • The provider can ask for a small payment ahead of the transfer, before they start unsealing data. The payment is meant to cover the provider’s computational cost of unsealing the first chunk of data (where a chunk is the agreed step size of the data transfer). This is needed to prevent clients from carrying out a DoS attack, in which they start several deals and cause the provider to engage a large amount of computational resources.

Bootstrapping Trust

Neither the client nor the provider has any specific reason to trust the other. Therefore, trust is established indirectly by making payments for a retrieval deal incrementally. This is achieved by sending vouchers as the data transfer progresses.

Trust establishment proceeds as follows:

  • When the deal is created, client & provider agree to a “payment interval” in bytes, which is the minimum amount of data the provider will send before each required increment.
  • They also agree to a “payment interval increment”. This means that the interval will increase by this value after each successful transfer and payment, as trust develops between client and provider.
  • Example (see also the sketch after this list):
    • If my “payment interval” is 1000, and my “payment interval increase” is 300, then:
    • The provider must send at least 1000 bytes before they require any payment (they may end up sending slightly more because block boundaries are uneven).
    • The client must pay (i.e., issue a voucher) for all bytes sent when the provider requests payment, provided that the provider has sent at least 1000 bytes.
    • The provider now must send at least 1300 bytes before they request payment again.
    • The client must pay (i.e., issue subsequent vouchers) for all bytes it has not yet paid for when the provider requests payment, assuming it has received at least 1300 bytes since last payment.
    • The process continues until the end of the retrieval, when the last payment will simply be for the remainder of bytes.
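
The progression in the example above can be reproduced with a few lines of arithmetic. The runnable sketch below prints the cumulative payment thresholds (1000, 2300, 3900, ...) and mirrors the Params.nextInterval logic shown later in this section; the constants are the example values, not protocol parameters.

package main

import "fmt"

func main() {
	const paymentInterval = 1000         // bytes before the first payment
	const paymentIntervalIncrease = 300  // growth of the interval per round

	threshold := uint64(0)
	step := uint64(paymentInterval)
	for i := 0; i < 4; i++ {
		threshold += step
		fmt.Printf("payment required after %d total bytes (step of %d)\n", threshold, step)
		step += paymentIntervalIncrease
	}
}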

Data Representation in the Retrieval Market

The retrieval market works based on the Payload CID. The PayloadCID is the hash that represents the root of the IPLD DAG of the UnixFS version of the file. At this stage the file is a raw system file with an IPFS-style representation. In order for a client to request some data from the retrieval market, they have to know the PayloadCID. It is important to highlight that PayloadCIDs are not stored or registered on-chain.
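
As a point of reference, a payload CID is constructed with the same CID machinery as any other IPLD content identifier. The sketch below builds a CIDv1 over raw bytes with go-cid and go-multihash; an actual PayloadCID is the root of the UnixFS DAG built over the file, so this only illustrates the identifier format, not DAG construction.

package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	data := []byte("hello filecoin")

	// Hash the bytes and wrap the digest in a CIDv1 with the raw codec.
	hash, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}
	c := cid.NewCidV1(cid.Raw, hash)
	fmt.Println("example CID:", c)
}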

package retrievalmarket

import (
	_ "embed"
	"errors"
	"fmt"

	"github.com/ipfs/go-cid"
	"github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/node/bindnode"
	bindnoderegistry "github.com/ipld/go-ipld-prime/node/bindnode/registry"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	paychtypes "github.com/filecoin-project/go-state-types/builtin/v8/paych"

	"github.com/filecoin-project/go-fil-markets/piecestore"
)

//go:generate cbor-gen-for --map-encoding Query QueryResponse DealProposal DealResponse Params QueryParams DealPayment ClientDealState ProviderDealState PaymentInfo RetrievalPeer Ask

//go:embed types.ipldsch
var embedSchema []byte

// QueryProtocolID is the protocol for querying information about retrieval
// deal parameters
const QueryProtocolID = protocol.ID("/fil/retrieval/qry/1.0.0")

// Unsubscribe is a function that unsubscribes a subscriber for either the
// client or the provider
type Unsubscribe func()

// PaymentInfo is the payment channel and lane for a deal, once it is setup
type PaymentInfo struct {
	PayCh address.Address
	Lane  uint64
}

// ClientDealState is the current state of a deal from the point of view
// of a retrieval client
type ClientDealState struct {
	DealProposal
	StoreID *uint64
	// Set when the data transfer is started
	ChannelID            *datatransfer.ChannelID
	LastPaymentRequested bool
	AllBlocksReceived    bool
	TotalFunds           abi.TokenAmount
	ClientWallet         address.Address
	MinerWallet          address.Address
	PaymentInfo          *PaymentInfo
	Status               DealStatus
	Sender               peer.ID
	TotalReceived        uint64
	Message              string
	BytesPaidFor         uint64
	CurrentInterval      uint64
	PaymentRequested     abi.TokenAmount
	FundsSpent           abi.TokenAmount
	UnsealFundsPaid      abi.TokenAmount
	WaitMsgCID           *cid.Cid // the CID of any message the client deal is waiting for
	VoucherShortfall     abi.TokenAmount
	LegacyProtocol       bool
}

func (deal *ClientDealState) NextInterval() uint64 {
	return deal.Params.nextInterval(deal.CurrentInterval)
}

type ProviderQueryEvent struct {
	Response QueryResponse
	Error    error
}

type ProviderValidationEvent struct {
	IsRestart bool
	Receiver  peer.ID
	Proposal  *DealProposal
	BaseCid   cid.Cid
	Selector  ipld.Node
	Response  *DealResponse
	Error     error
}

// ProviderDealState is the current state of a deal from the point of view
// of a retrieval provider
type ProviderDealState struct {
	DealProposal
	StoreID uint64

	ChannelID     *datatransfer.ChannelID
	PieceInfo     *piecestore.PieceInfo
	Status        DealStatus
	Receiver      peer.ID
	FundsReceived abi.TokenAmount
	Message       string
}

// Identifier provides a unique id for this provider deal
func (pds ProviderDealState) Identifier() ProviderDealIdentifier {
	return ProviderDealIdentifier{Receiver: pds.Receiver, DealID: pds.ID}
}

// ProviderDealIdentifier is a value that uniquely identifies a deal
type ProviderDealIdentifier struct {
	Receiver peer.ID
	DealID   DealID
}

func (p ProviderDealIdentifier) String() string {
	return fmt.Sprintf("%v/%v", p.Receiver, p.DealID)
}

// RetrievalPeer is a provider address/peer.ID pair (everything needed to make
// deals with a miner)
type RetrievalPeer struct {
	Address  address.Address
	ID       peer.ID // optional
	PieceCID *cid.Cid
}

// QueryResponseStatus indicates whether a queried piece is available
type QueryResponseStatus uint64

const (
	// QueryResponseAvailable indicates a provider has a piece and is prepared to
	// return it
	QueryResponseAvailable QueryResponseStatus = iota

	// QueryResponseUnavailable indicates a provider either does not have or cannot
	// serve the queried piece to the client
	QueryResponseUnavailable

	// QueryResponseError indicates something went wrong generating a query response
	QueryResponseError
)

// QueryItemStatus (V1) indicates whether the requested part of a piece (payload or selector)
// is available for retrieval
type QueryItemStatus uint64

const (
	// QueryItemAvailable indicates requested part of the piece is available to be
	// served
	QueryItemAvailable QueryItemStatus = iota

	// QueryItemUnavailable indicates the piece either does not contain the requested
	// item or it cannot be served
	QueryItemUnavailable

	// QueryItemUnknown indicates the provider cannot determine if the given item
	// is part of the requested piece (for example, if the piece is sealed and the
	// miner does not maintain a payload CID index)
	QueryItemUnknown
)

// QueryParams - V1 - indicate what specific information about a piece that a retrieval
// client is interested in, as well as specific parameters the client is seeking
// for the retrieval deal
type QueryParams struct {
	PieceCID *cid.Cid // optional, query if miner has this cid in this piece. some miners may not be able to respond.
	//Selector                   ipld.Node // optional, query if miner has this cid in this piece. some miners may not be able to respond.
	//MaxPricePerByte            abi.TokenAmount    // optional, tell miner uninterested if more expensive than this
	//MinPaymentInterval         uint64    // optional, tell miner uninterested unless payment interval is greater than this
	//MinPaymentIntervalIncrease uint64    // optional, tell miner uninterested unless payment interval increase is greater than this
}

// Query is a query to a given provider to determine information about a piece
// they may have available for retrieval
type Query struct {
	PayloadCID  cid.Cid // V0
	QueryParams         // V1
}

// QueryUndefined is a query with no values
var QueryUndefined = Query{}

// NewQueryV0 creates a V0 query (which only specifies a payload)
func NewQueryV0(payloadCID cid.Cid) Query {
	return Query{PayloadCID: payloadCID}
}

// NewQueryV1 creates a V1 query (which has an optional pieceCID)
func NewQueryV1(payloadCID cid.Cid, pieceCID *cid.Cid) Query {
	return Query{
		PayloadCID: payloadCID,
		QueryParams: QueryParams{
			PieceCID: pieceCID,
		},
	}
}

// QueryResponse is a miners response to a given retrieval query
type QueryResponse struct {
	Status        QueryResponseStatus
	PieceCIDFound QueryItemStatus // V1 - if a PieceCID was requested, the result
	// SelectorFound   QueryItemStatus // V1 - if a Selector was requested, the result

	Size uint64 // Total size of piece in bytes
	// ExpectedPayloadSize uint64 // V1 - optional, if PayloadCID + selector are specified and miner knows, can offer an expected size

	PaymentAddress             address.Address // address to send funds to -- may be different than miner addr
	MinPricePerByte            abi.TokenAmount
	MaxPaymentInterval         uint64
	MaxPaymentIntervalIncrease uint64
	Message                    string
	UnsealPrice                abi.TokenAmount
}

// QueryResponseUndefined is an empty QueryResponse
var QueryResponseUndefined = QueryResponse{}

// PieceRetrievalPrice is the total price to retrieve the piece (size * MinPricePerByte + UnsealedPrice)
func (qr QueryResponse) PieceRetrievalPrice() abi.TokenAmount {
	return big.Add(big.Mul(qr.MinPricePerByte, abi.NewTokenAmount(int64(qr.Size))), qr.UnsealPrice)
}

// PayloadRetrievalPrice is the expected price to retrieve just the given payload
// & selector (V1)
// func (qr QueryResponse) PayloadRetrievalPrice() abi.TokenAmount {
//	return types.BigMul(qr.MinPricePerByte, types.NewInt(qr.ExpectedPayloadSize))
// }

// IsTerminalError returns true if this status indicates processing of this deal
// is complete with an error
func IsTerminalError(status DealStatus) bool {
	return status == DealStatusDealNotFound ||
		status == DealStatusFailing ||
		status == DealStatusRejected
}

// IsTerminalSuccess returns true if this status indicates processing of this deal
// is complete with a success
func IsTerminalSuccess(status DealStatus) bool {
	return status == DealStatusCompleted
}

// IsTerminalStatus returns true if this status indicates processing of a deal is
// complete (either success or error)
func IsTerminalStatus(status DealStatus) bool {
	return IsTerminalError(status) || IsTerminalSuccess(status)
}

// Params are the parameters requested for a retrieval deal proposal
type Params struct {
	Selector                CborGenCompatibleNode // V1
	PieceCID                *cid.Cid
	PricePerByte            abi.TokenAmount
	PaymentInterval         uint64 // when to request payment
	PaymentIntervalIncrease uint64
	UnsealPrice             abi.TokenAmount
}

// paramsBindnodeOptions is the bindnode options required to convert custom
// types used by the Param type
var paramsBindnodeOptions = []bindnode.Option{
	CborGenCompatibleNodeBindnodeOption,
	TokenAmountBindnodeOption,
}

func (p Params) SelectorSpecified() bool {
	return !p.Selector.IsNull()
}

func (p Params) IntervalLowerBound(currentInterval uint64) uint64 {
	intervalSize := p.PaymentInterval
	var lowerBound uint64
	var target uint64
	for target <= currentInterval {
		lowerBound = target
		target += intervalSize
		intervalSize += p.PaymentIntervalIncrease
	}
	return lowerBound
}

// OutstandingBalance produces the amount owed based on the deal params
// for the given transfer state and funds received
func (p Params) OutstandingBalance(fundsReceived abi.TokenAmount, sent uint64, inFinalization bool) big.Int {
	// Check if the payment covers unsealing
	if fundsReceived.LessThan(p.UnsealPrice) {
		return big.Sub(p.UnsealPrice, fundsReceived)
	}

	// if unsealing funds are received and the retrieval is free, proceed
	if p.PricePerByte.IsZero() {
		return big.Zero()
	}

	// Calculate how much payment has been made for transferred data
	transferPayment := big.Sub(fundsReceived, p.UnsealPrice)

	// The provider sends data and the client sends payment for the data.
	// The provider will send a limited amount of extra data before receiving
	// payment. Given the current limit, check if the client has paid enough
	// to unlock the next interval.
	minimumBytesToPay := sent // for last payment, we need to get past zero
	if !inFinalization {
		minimumBytesToPay = p.IntervalLowerBound(sent)
	}

	// Calculate the minimum required payment
	totalPaymentRequired := big.Mul(big.NewInt(int64(minimumBytesToPay)), p.PricePerByte)

	// Calculate payment owed
	owed := big.Sub(totalPaymentRequired, transferPayment)
	if owed.LessThan(big.Zero()) {
		return big.Zero()
	}
	return owed
}

// NextInterval produces the maximum data that can be transferred before more
// payment is requested
func (p Params) NextInterval(fundsReceived abi.TokenAmount) uint64 {
	if p.PricePerByte.NilOrZero() {
		return 0
	}
	currentInterval := uint64(0)
	bytesPaid := fundsReceived
	if !p.UnsealPrice.NilOrZero() {
		bytesPaid = big.Sub(bytesPaid, p.UnsealPrice)
	}
	bytesPaid = big.Div(bytesPaid, p.PricePerByte)
	if bytesPaid.GreaterThan(big.Zero()) {
		currentInterval = bytesPaid.Uint64()
	}
	return p.nextInterval(currentInterval)
}

func (p Params) nextInterval(currentInterval uint64) uint64 {
	intervalSize := p.PaymentInterval
	var nextInterval uint64
	for nextInterval <= currentInterval {
		nextInterval += intervalSize
		intervalSize += p.PaymentIntervalIncrease
	}
	return nextInterval
}

// NewParamsV0 generates parameters for a retrieval deal, which is always a whole piece deal
func NewParamsV0(pricePerByte abi.TokenAmount, paymentInterval uint64, paymentIntervalIncrease uint64) Params {
	return Params{
		PricePerByte:            pricePerByte,
		PaymentInterval:         paymentInterval,
		PaymentIntervalIncrease: paymentIntervalIncrease,
		UnsealPrice:             big.Zero(),
	}
}

// NewParamsV1 generates parameters for a retrieval deal, including a selector
func NewParamsV1(pricePerByte abi.TokenAmount, paymentInterval uint64, paymentIntervalIncrease uint64, sel datamodel.Node, pieceCid *cid.Cid, unsealPrice abi.TokenAmount) (Params, error) {
	if sel == nil {
		return Params{}, xerrors.New("selector required for NewParamsV1")
	}

	return Params{
		Selector:                CborGenCompatibleNode{Node: sel},
		PieceCID:                pieceCid,
		PricePerByte:            pricePerByte,
		PaymentInterval:         paymentInterval,
		PaymentIntervalIncrease: paymentIntervalIncrease,
		UnsealPrice:             unsealPrice,
	}, nil
}

// DealID is an identifier for a retrieval deal (unique to a client)
type DealID uint64

func (d DealID) String() string {
	return fmt.Sprintf("%d", d)
}

// DealProposal is a proposal for a new retrieval deal
type DealProposal struct {
	PayloadCID cid.Cid
	ID         DealID
	Params
}

// DealProposalType is the DealProposal voucher type
const DealProposalType = datatransfer.TypeIdentifier("RetrievalDealProposal/1")

// dealProposalBindnodeOptions is the bindnode options required to convert
// custom types used by the DealProposal type; the only custom types involved
// are for Params so we can reuse those options.
var dealProposalBindnodeOptions = paramsBindnodeOptions

func DealProposalFromNode(node datamodel.Node) (*DealProposal, error) {
	if node == nil {
		return nil, fmt.Errorf("empty voucher")
	}
	dpIface, err := BindnodeRegistry.TypeFromNode(node, &DealProposal{})
	if err != nil {
		return nil, xerrors.Errorf("invalid DealProposal: %w", err)
	}
	dp, _ := dpIface.(*DealProposal) // safe to assume type
	return dp, nil
}

// DealProposalUndefined is an undefined deal proposal
var DealProposalUndefined = DealProposal{}

// DealResponse is a response to a retrieval deal proposal
type DealResponse struct {
	Status DealStatus
	ID     DealID

	// payment required to proceed
	PaymentOwed abi.TokenAmount

	Message string
}

// DealResponseType is the DealResponse usable as a voucher type
const DealResponseType = datatransfer.TypeIdentifier("RetrievalDealResponse/1")

// dealResponseBindnodeOptions is the bindnode options required to convert custom
// types used by the DealResponse type
var dealResponseBindnodeOptions = []bindnode.Option{TokenAmountBindnodeOption}

// DealResponseUndefined is an undefined deal response
var DealResponseUndefined = DealResponse{}

func DealResponseFromNode(node datamodel.Node) (*DealResponse, error) {
	if node == nil {
		return nil, fmt.Errorf("empty voucher")
	}
	dpIface, err := BindnodeRegistry.TypeFromNode(node, &DealResponse{})
	if err != nil {
		return nil, xerrors.Errorf("invalid DealResponse: %w", err)
	}
	dp, _ := dpIface.(*DealResponse) // safe to assume type
	return dp, nil
}

// DealPayment is a payment for an in progress retrieval deal
type DealPayment struct {
	ID             DealID
	PaymentChannel address.Address
	PaymentVoucher *paychtypes.SignedVoucher
}

// DealPaymentType is the DealPayment voucher type
const DealPaymentType = datatransfer.TypeIdentifier("RetrievalDealPayment/1")

// dealPaymentBindnodeOptions is the bindnode options required to convert custom
// types used by the DealPayment type
var dealPaymentBindnodeOptions = []bindnode.Option{
	SignatureBindnodeOption,
	AddressBindnodeOption,
	BigIntBindnodeOption,
	TokenAmountBindnodeOption,
}

// DealPaymentUndefined is an undefined deal payment
var DealPaymentUndefined = DealPayment{}

func DealPaymentFromNode(node datamodel.Node) (*DealPayment, error) {
	if node == nil {
		return nil, fmt.Errorf("empty voucher")
	}
	dpIface, err := BindnodeRegistry.TypeFromNode(node, &DealPayment{})
	if err != nil {
		return nil, xerrors.Errorf("invalid DealPayment: %w", err)
	}
	dp, _ := dpIface.(*DealPayment) // safe to assume type
	return dp, nil
}

var (
	// ErrNotFound means a piece was not found during retrieval
	ErrNotFound = errors.New("not found")

	// ErrVerification means a retrieval contained a block response that did not verify
	ErrVerification = errors.New("Error when verify data")
)

type Ask struct {
	PricePerByte            abi.TokenAmount
	UnsealPrice             abi.TokenAmount
	PaymentInterval         uint64
	PaymentIntervalIncrease uint64
}

// ShortfallError is an error that indicates a shortfall of funds
type ShortfallError struct {
	shortfall abi.TokenAmount
}

// NewShortfallError returns a new error indicating a shortfall of funds
func NewShortfallError(shortfall abi.TokenAmount) error {
	return ShortfallError{shortfall}
}

// Shortfall returns the numerical value of the shortfall
func (se ShortfallError) Shortfall() abi.TokenAmount {
	return se.shortfall
}
func (se ShortfallError) Error() string {
	return fmt.Sprintf("Inssufficient Funds. Shortfall: %s", se.shortfall.String())
}

// ChannelAvailableFunds provides information about funds in a channel
type ChannelAvailableFunds struct {
	// ConfirmedAmt is the amount of funds that have been confirmed on-chain
	// for the channel
	ConfirmedAmt abi.TokenAmount
	// PendingAmt is the amount of funds that are pending confirmation on-chain
	PendingAmt abi.TokenAmount
	// PendingWaitSentinel can be used with PaychGetWaitReady to wait for
	// confirmation of pending funds
	PendingWaitSentinel *cid.Cid
	// QueuedAmt is the amount that is queued up behind a pending request
	QueuedAmt abi.TokenAmount
	// VoucherRedeemedAmt is the amount that is redeemed by vouchers on-chain
	// and in the local datastore
	VoucherReedeemedAmt abi.TokenAmount
}

// PricingInput provides input parameters required to price a retrieval deal.
type PricingInput struct {
	// PayloadCID is the cid of the payload to retrieve.
	PayloadCID cid.Cid
	// PieceCID is the cid of the Piece from which the Payload will be retrieved.
	PieceCID cid.Cid
	// PieceSize is the size of the Piece from which the payload will be retrieved.
	PieceSize abi.UnpaddedPieceSize
	// Client is the peerID of the retrieval client.
	Client peer.ID
	// VerifiedDeal is true if there exists a verified storage deal for the PayloadCID.
	VerifiedDeal bool
	// Unsealed is true if there exists an unsealed sector from which we can retrieve the given payload.
	Unsealed bool
	// CurrentAsk is the current configured ask in the ask-store.
	CurrentAsk Ask
}

var BindnodeRegistry = bindnoderegistry.NewRegistry()

func init() {
	for _, r := range []struct {
		typ     interface{}
		typName string
		opts    []bindnode.Option
	}{
		{(*Params)(nil), "Params", paramsBindnodeOptions},
		{(*DealProposal)(nil), "DealProposal", dealProposalBindnodeOptions},
		{(*DealResponse)(nil), "DealResponse", dealResponseBindnodeOptions},
		{(*DealPayment)(nil), "DealPayment", dealPaymentBindnodeOptions},
	} {
		if err := BindnodeRegistry.RegisterType(r.typ, string(embedSchema), r.typName, r.opts...); err != nil {
			panic(err.Error())
		}
	}
}

Retrieval Peer Resolver

The peer resolver is a content routing interface to discover retrieval miners that have a given Piece.

It can be backed by both a local store of previous storage deals or by querying the chain.

// PeerResolver is an interface for looking up providers that may have a piece
type PeerResolver interface {
	GetPeers(payloadCID cid.Cid) ([]RetrievalPeer, error) // TODO: channel
}
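
A trivial, illustrative implementation backed by a static map (rather than local deal records or chain queries) might look as follows; staticPeerResolver is a hypothetical type, shown here as a self-contained fragment.

package peerresolverexample

import (
	"github.com/ipfs/go-cid"

	rm "github.com/filecoin-project/go-fil-markets/retrievalmarket"
)

// staticPeerResolver is a minimal, illustrative resolver backed by an
// in-memory map from payload CID to known retrieval peers; a real resolver
// would consult records of past storage deals or query the chain.
type staticPeerResolver struct {
	peers map[cid.Cid][]rm.RetrievalPeer
}

// GetPeers returns the statically configured peers for a payload CID.
func (r *staticPeerResolver) GetPeers(payloadCID cid.Cid) ([]rm.RetrievalPeer, error) {
	return r.peers[payloadCID], nil
}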

Retrieval Protocols

The retrieval market is implemented using the following libp2p service.

Name: Query Protocol Protocol ID: /fil/<network-name>/retrieval/qry/1.0.0

Request: CBOR Encoded RetrievalQuery Data Structure Response: CBOR Encoded RetrievalQueryResponse Data Structure
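
Below is a minimal sketch of a raw query exchange over this protocol, assuming a libp2p host and the cbor-gen-generated MarshalCBOR/UnmarshalCBOR methods on Query and QueryResponse. The production client adds timeouts, retries, and legacy-protocol fallback, so this is illustrative only.

package queryexample

import (
	"context"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"

	rm "github.com/filecoin-project/go-fil-markets/retrievalmarket"
)

// queryProvider opens a stream on the query protocol, writes a CBOR-encoded
// Query, and reads back the provider's CBOR-encoded QueryResponse.
func queryProvider(ctx context.Context, h host.Host, provider peer.ID, payloadCID cid.Cid) (rm.QueryResponse, error) {
	s, err := h.NewStream(ctx, provider, rm.QueryProtocolID)
	if err != nil {
		return rm.QueryResponseUndefined, err
	}
	defer s.Close()

	// Send the CBOR-encoded query (MarshalCBOR is generated by cbor-gen).
	q := rm.NewQueryV0(payloadCID)
	if err := q.MarshalCBOR(s); err != nil {
		return rm.QueryResponseUndefined, err
	}

	// Read and decode the provider's response.
	var resp rm.QueryResponse
	if err := resp.UnmarshalCBOR(s); err != nil {
		return rm.QueryResponseUndefined, err
	}
	return resp, nil
}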

Retrieval Client

Client Dependencies

The Retrieval Client depends on the following dependencies

  • Host: A libp2p host (to set up the libp2p protocols)
  • Filecoin Node: A node implementation to query the chain for pieces and to setup and manage payment channels
  • BlockStore: Same as one used by data transfer module
  • Data Transfer: Module used for transferring payload. Writes to the blockstore.
package retrievalmarket

import (
	"context"

	bstore "github.com/ipfs/boxo/blockstore"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/go-fil-markets/shared"
)

type PayloadCID = cid.Cid

// BlockstoreAccessor is used by the retrieval market client to get a
// blockstore when needed, concretely to store blocks received from the provider.
// This abstraction allows the caller to provide any blockstore implementation:
// a CARv2 file, an IPFS blockstore, or something else.
type BlockstoreAccessor interface {
	Get(DealID, PayloadCID) (bstore.Blockstore, error)
	Done(DealID) error
}

// ClientSubscriber is a callback that is registered to listen for retrieval events
type ClientSubscriber func(event ClientEvent, state ClientDealState)

type RetrieveResponse struct {
	DealID      DealID
	CarFilePath string
}

// RetrievalClient is a client interface for making retrieval deals
type RetrievalClient interface {

	// NextID generates a new deal ID.
	NextID() DealID

	// Start initializes the client by running migrations
	Start(ctx context.Context) error

	// OnReady registers a listener for when the client comes on line
	OnReady(shared.ReadyFunc)

	// FindProviders finds retrieval providers who may be storing a given piece
	FindProviders(payloadCID cid.Cid) []RetrievalPeer

	// Query asks a provider for information about a piece it is storing
	Query(
		ctx context.Context,
		p RetrievalPeer,
		payloadCID cid.Cid,
		params QueryParams,
	) (QueryResponse, error)

	// Retrieve retrieves all or part of a piece with the given retrieval parameters
	Retrieve(
		ctx context.Context,
		id DealID,
		payloadCID cid.Cid,
		params Params,
		totalFunds abi.TokenAmount,
		p RetrievalPeer,
		clientWallet address.Address,
		minerWallet address.Address,
	) (DealID, error)

	// SubscribeToEvents listens for events that happen related to client retrievals
	SubscribeToEvents(subscriber ClientSubscriber) Unsubscribe

	// V1

	// TryRestartInsufficientFunds attempts to restart any deals stuck in the insufficient funds state
	// after funds are added to a given payment channel
	TryRestartInsufficientFunds(paymentChannel address.Address) error

	// CancelDeal attempts to cancel an inprogress deal
	CancelDeal(id DealID) error

	// GetDeal returns a given deal by deal ID, if it exists
	GetDeal(dealID DealID) (ClientDealState, error)

	// ListDeals returns all deals
	ListDeals() (map[DealID]ClientDealState, error)
}

Retrieval Provider (Miner)

Provider Dependencies

The Retrieval Provider depends on the following dependencies

  • Host: A libp2p host (to set up the libp2p protocols)
  • Filecoin Node: A node implementation to query the chain for pieces and to setup and manage payment channels
  • StorageMining Subsystem: For unsealing sectors
  • BlockStore: Same as one used by data transfer module
  • Data Transfer: Module used for transferring payload. Reads from the blockstore.
package retrievalmarket

import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/go-fil-markets/shared"
)

// ProviderSubscriber is a callback that is registered to listen for retrieval events on a provider
type ProviderSubscriber func(event ProviderEvent, state ProviderDealState)

// ProviderQueryEventSubscriber is a callback that is registered to listen for query message events
type ProviderQueryEventSubscriber func(evt ProviderQueryEvent)

// ProviderValidationSubscriber is a callback that is registered to listen for validation events
type ProviderValidationSubscriber func(evt ProviderValidationEvent)

// RetrievalProvider is an interface by which a provider configures their
// retrieval operations and monitors deals received and processed
type RetrievalProvider interface {
	// Start begins listening for deals on the given host
	Start(ctx context.Context) error

	// OnReady registers a listener for when the provider comes on line
	OnReady(shared.ReadyFunc)

	// Stop stops handling incoming requests
	Stop() error

	// SetAsk sets the retrieval payment parameters that this miner will accept
	SetAsk(ask *Ask)

	// GetAsk returns the retrieval providers pricing information
	GetAsk() *Ask

	// GetDynamicAsk quotes a dynamic price for the retrieval deal by calling the user configured
	// dynamic pricing function. It passes the static price parameters set in the Ask Store to the pricing function.
	GetDynamicAsk(ctx context.Context, input PricingInput, storageDeals []abi.DealID) (Ask, error)

	// SubscribeToEvents listens for events that happen related to client retrievals
	SubscribeToEvents(subscriber ProviderSubscriber) Unsubscribe

	// SubscribeToQueryEvents subscribes to an event that is fired when a message
	// is received on the query protocol
	SubscribeToQueryEvents(subscriber ProviderQueryEventSubscriber) Unsubscribe

	// SubscribeToValidationEvents subscribes to an event that is fired when the
	// provider validates a request for data
	SubscribeToValidationEvents(subscriber ProviderValidationSubscriber) Unsubscribe

	ListDeals() map[ProviderDealIdentifier]ProviderDealState
}

// AskStore is an interface which provides access to a persisted retrieval Ask
type AskStore interface {
	GetAsk() *Ask
	SetAsk(ask *Ask) error
}
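
An illustrative provider-side setup follows: configure the ask that will be returned to client queries, then start listening for deals. The values are arbitrary, and configureProvider is a hypothetical helper, not part of the retrievalmarket API.

package providerexample

import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"

	rm "github.com/filecoin-project/go-fil-markets/retrievalmarket"
)

// configureProvider sets the retrieval ask and starts the provider; a real
// miner would derive these values from its own pricing policy.
func configureProvider(ctx context.Context, provider rm.RetrievalProvider) error {
	provider.SetAsk(&rm.Ask{
		PricePerByte:            abi.NewTokenAmount(2),    // attoFIL per byte (illustrative)
		UnsealPrice:             abi.NewTokenAmount(1000), // flat unsealing charge (illustrative)
		PaymentInterval:         1 << 20,                  // 1 MiB before the first payment request
		PaymentIntervalIncrease: 1 << 20,                  // grow the interval by 1 MiB each round
	})
	return provider.Start(ctx)
}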

Retrieval Deal Status

package retrievalmarket

import "fmt"

// DealStatus is the status of a retrieval deal returned by a provider
// in a DealResponse
type DealStatus uint64

const (
	// DealStatusNew is a deal that nothing has happened with yet
	DealStatusNew DealStatus = iota

	// DealStatusUnsealing means the provider is unsealing data
	DealStatusUnsealing

	// DealStatusUnsealed means the provider has finished unsealing data
	DealStatusUnsealed

	// DealStatusWaitForAcceptance means we're waiting to hear back if the provider accepted our deal
	DealStatusWaitForAcceptance

	// DealStatusPaymentChannelCreating is the status set while waiting for the
	// payment channel creation to complete
	DealStatusPaymentChannelCreating

	// DealStatusPaymentChannelAddingFunds is the status when we are waiting for funds
	// to finish being sent to the payment channel
	DealStatusPaymentChannelAddingFunds

	// DealStatusAccepted means a deal has been accepted by a provider
	// and its is ready to proceed with retrieval
	DealStatusAccepted

	// DealStatusFundsNeededUnseal means a deal has been accepted by a provider
	// and payment is needed to unseal the data
	DealStatusFundsNeededUnseal

	// DealStatusFailing indicates something went wrong during a retrieval,
	// and we are cleaning up before terminating with an error
	DealStatusFailing

	// DealStatusRejected indicates the provider rejected a client's deal proposal
	// for some reason
	DealStatusRejected

	// DealStatusFundsNeeded indicates the provider needs a payment voucher to
	// continue processing the deal
	DealStatusFundsNeeded

	// DealStatusSendFunds indicates the client is now going to send funds because we reached the threshold of the last payment
	DealStatusSendFunds

	// DealStatusSendFundsLastPayment indicates the client is now going to send final funds because
	// we reached the threshold of the final payment
	DealStatusSendFundsLastPayment

	// DealStatusOngoing indicates the provider is continuing to process a deal
	DealStatusOngoing

	// DealStatusFundsNeededLastPayment indicates the provider needs a payment voucher
	// in order to complete a deal
	DealStatusFundsNeededLastPayment

	// DealStatusCompleted indicates a deal is complete
	DealStatusCompleted

	// DealStatusDealNotFound indicates an update was received for a deal that could
	// not be identified
	DealStatusDealNotFound

	// DealStatusErrored indicates a deal has terminated in an error
	DealStatusErrored

	// DealStatusBlocksComplete indicates that all blocks have been processed for the piece
	DealStatusBlocksComplete

	// DealStatusFinalizing means the last payment has been received and
	// we are just confirming the deal is complete
	DealStatusFinalizing

	// DealStatusCompleting is just an inbetween state to perform final cleanup of
	// complete deals
	DealStatusCompleting

	// DealStatusCheckComplete is used when the provider completes without a last payment
	// requested cycle, to verify we have received all blocks
	DealStatusCheckComplete

	// DealStatusCheckFunds means we are looking at the state of funding for the channel to determine
	// if more money is incoming
	DealStatusCheckFunds

	// DealStatusInsufficientFunds indicates we have depleted funds for the retrieval payment channel
	// - we can resume after funds are added
	DealStatusInsufficientFunds

	// DealStatusPaymentChannelAllocatingLane is the status when we are making a lane for this channel
	DealStatusPaymentChannelAllocatingLane

	// DealStatusCancelling means we are cancelling an in-progress deal
	DealStatusCancelling

	// DealStatusCancelled means a deal has been cancelled
	DealStatusCancelled

	// DealStatusRetryLegacy means we're attempting the deal proposal for a second time using the legacy datatype
	DealStatusRetryLegacy

	// DealStatusWaitForAcceptanceLegacy means we're waiting to hear the results on the legacy protocol
	DealStatusWaitForAcceptanceLegacy

	// DealStatusClientWaitingForLastBlocks means that the provider has told
	// the client that all blocks were sent for the deal, and the client is
	// waiting for the last blocks to arrive. This should only happen when
	// the deal price per byte is zero (if it's not zero the provider asks
	// for final payment after sending the last blocks).
	DealStatusClientWaitingForLastBlocks

	// DealStatusPaymentChannelAddingInitialFunds means that a payment channel
	// exists from an earlier deal between client and provider, but we need
	// to add funds to the channel for this particular deal
	DealStatusPaymentChannelAddingInitialFunds

	// DealStatusErroring means that there was an error and we need to
	// do some cleanup before moving to the error state
	DealStatusErroring

	// DealStatusRejecting means that the deal was rejected and we need to do
	// some cleanup before moving to the rejected state
	DealStatusRejecting

	// DealStatusDealNotFoundCleanup means that the deal was not found and we
	// need to do some cleanup before moving to the not found state
	DealStatusDealNotFoundCleanup

	// DealStatusFinalizingBlockstore means that all blocks have been received,
	// and the blockstore is being finalized
	DealStatusFinalizingBlockstore
)

// DealStatuses maps deal status to a human readable representation
var DealStatuses = map[DealStatus]string{
	DealStatusNew:                              "DealStatusNew",
	DealStatusUnsealing:                        "DealStatusUnsealing",
	DealStatusUnsealed:                         "DealStatusUnsealed",
	DealStatusWaitForAcceptance:                "DealStatusWaitForAcceptance",
	DealStatusPaymentChannelCreating:           "DealStatusPaymentChannelCreating",
	DealStatusPaymentChannelAddingFunds:        "DealStatusPaymentChannelAddingFunds",
	DealStatusAccepted:                         "DealStatusAccepted",
	DealStatusFundsNeededUnseal:                "DealStatusFundsNeededUnseal",
	DealStatusFailing:                          "DealStatusFailing",
	DealStatusRejected:                         "DealStatusRejected",
	DealStatusFundsNeeded:                      "DealStatusFundsNeeded",
	DealStatusSendFunds:                        "DealStatusSendFunds",
	DealStatusSendFundsLastPayment:             "DealStatusSendFundsLastPayment",
	DealStatusOngoing:                          "DealStatusOngoing",
	DealStatusFundsNeededLastPayment:           "DealStatusFundsNeededLastPayment",
	DealStatusCompleted:                        "DealStatusCompleted",
	DealStatusDealNotFound:                     "DealStatusDealNotFound",
	DealStatusErrored:                          "DealStatusErrored",
	DealStatusBlocksComplete:                   "DealStatusBlocksComplete",
	DealStatusFinalizing:                       "DealStatusFinalizing",
	DealStatusCompleting:                       "DealStatusCompleting",
	DealStatusCheckComplete:                    "DealStatusCheckComplete",
	DealStatusCheckFunds:                       "DealStatusCheckFunds",
	DealStatusInsufficientFunds:                "DealStatusInsufficientFunds",
	DealStatusPaymentChannelAllocatingLane:     "DealStatusPaymentChannelAllocatingLane",
	DealStatusCancelling:                       "DealStatusCancelling",
	DealStatusCancelled:                        "DealStatusCancelled",
	DealStatusRetryLegacy:                      "DealStatusRetryLegacy",
	DealStatusWaitForAcceptanceLegacy:          "DealStatusWaitForAcceptanceLegacy",
	DealStatusClientWaitingForLastBlocks:       "DealStatusWaitingForLastBlocks",
	DealStatusPaymentChannelAddingInitialFunds: "DealStatusPaymentChannelAddingInitialFunds",
	DealStatusErroring:                         "DealStatusErroring",
	DealStatusRejecting:                        "DealStatusRejecting",
	DealStatusDealNotFoundCleanup:              "DealStatusDealNotFoundCleanup",
	DealStatusFinalizingBlockstore:             "DealStatusFinalizingBlockstore",
}

func (s DealStatus) String() string {
	str, ok := DealStatuses[s]
	if ok {
		return str
	}
	return fmt.Sprintf("DealStatusUnknown - %d", s)
}

Libraries

DRAND

DRand (Distributed Randomness) is a publicly verifiable random beacon protocol Filecoin relies on as a source of unbiasable entropy for leader election (see Secret Leader Election).

At a high-level, the drand protocol runs a series of MPCs (Multi-Party Computations) in order to produce a series of deterministic, verifiable random values. Specifically, after a trusted setup, a known (to each other) group of n drand nodes sign a given message using t-of-n threshold BLS signatures in a series of successive rounds occurring at regular intervals (the drand round time). Any node that has gathered t of the signatures can reconstruct the full BLS signature. This signature can then be hashed in order to produce a collective random value which can be verified against the collective public key generated during the trusted setup. Note that while this can be done by the drand node, the random value (i.e. hashed value) should be checked by the consumer of the beacon. In Filecoin, we hash it using blake2b in order to obtain a 256 bit output.

drand assumes that at least t of the n nodes are honest (and online – for liveness). If this threshold is broken, the adversary can permanently halt randomness production but cannot otherwise bias the randomness.

You can learn more about how drand works, by visiting its repository, or reading its specification.

In the following sections we look in turn at how the Filecoin protocol makes use of drand randomness, and at some of the characteristics of the specific drand network Filecoin uses.

Drand randomness outputs

By polling the appropriate endpoint (see below for specifics on the drand network Filecoin uses), a Filecoin node will get back a drand value formatted as follows (e.g.):

{
  "round": 367,
  "signature": "b62dd642e939191af1f9e15bef0f0b0e9562a5f570a12a231864afe468377e2a6424a92ccfc34ef1471cbd58c37c6b020cf75ce9446d2aa1252a090250b2b1441f8a2a0d22208dcc09332eaa0143c4a508be13de63978dbed273e3b9813130d5",
  "previous_signature": "afc545efb57f591dbdf833c339b3369f569566a93e49578db46b6586299422483b7a2d595814046e2847494b401650a0050981e716e531b6f4b620909c2bf1476fd82cf788a110becbc77e55746a7cccd47fb171e8ae2eea2a22fcc6a512486d"
}

Specifically, we have:

  • Signature – the threshold BLS signature on the previous signature value Previous and the current round number round.
  • PreviousSignature – the threshold BLS signature from the previous drand round.
  • Round – the index of Randomness in the sequence of all random values produced by this drand network.

Specifically, the message signed is the concatenation of the round number treated as a uint64 and the previous signature. At the moment, drand uses BLS signatures on the BLS12-381 curve with the latest v7 RFC of hash-to-curve, and the signature is made over G1 (for more, see the drand specification).

Polling the drand network

Filecoin nodes fetch the drand entry from the distribution network of the selected drand network.

drand distributes randomness via multiple distribution channels (HTTP servers, S3 buckets, gossiping…). Simply put, the drand nodes themselves will not be directly accessible by consumers; rather, highly-available relays will be set up to serve drand values over these distribution channels. See the section below for more on the drand network configuration.

On initialization, Filecoin initializes a drand client with chain info that contains the following information:

  • Period – the period of time between each drand randomness generation
  • GenesisTime – the time at which the first round in the drand randomness chain is created
  • PublicKey – the public key to verify randomness
  • GenesisSeed – the seed that has been used for creating the first randomness

Note that it is possible to store only the hash of this chain info and to retrieve the full contents from the drand distribution network via the /info endpoint.

Thereafter, the Filecoin client can call drand’s endpoints:

  • /public/latest to get the latest randomness value produced by the beacon
  • /public/<round> to get the randomness value produced by the beacon at a given round
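
As an illustration, the sketch below polls the /public/latest endpoint of a drand HTTP relay and decodes the fields shown earlier; the relay URL is a placeholder, and the struct mirrors only the fields listed above.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// drandEntry mirrors the randomness output format shown above.
type drandEntry struct {
	Round             uint64 `json:"round"`
	Signature         string `json:"signature"`
	PreviousSignature string `json:"previous_signature"`
}

// fetchLatest polls a drand relay for the latest beacon entry.
func fetchLatest(relayURL string) (*drandEntry, error) {
	resp, err := http.Get(relayURL + "/public/latest")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var entry drandEntry
	if err := json.NewDecoder(resp.Body).Decode(&entry); err != nil {
		return nil, err
	}
	return &entry, nil
}

func main() {
	entry, err := fetchLatest("https://drand-relay.example.org") // placeholder relay URL
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Printf("drand round %d\n", entry.Round)
}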

Using drand in Filecoin

Drand is used as a randomness beacon for leader election in Filecoin. You can read more about that in the secret leader election algorithm of Expected Consensus. See drand used in the Filecoin lotus implementation here.

While drand returns multiple values with every call to the beacon (see above), Filecoin blocks need only store a subset of these in order to track a full drand chain. This information can then be mixed with on-chain data for use in Filecoin. See randomness for more.

Verifying an incoming drand value

Upon receiving a new drand randomness value from a beacon, a Filecoin node should immediately verify its validity. That is, it should verify:

  • that the Signature field is verified by the beacon’s PublicKey as the beacon’s signature of SHA256(PreviousSignature || Round).
  • that the Randomness field is SHA256(Signature).

See drand for an example.
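
A minimal sketch of these two checks follows. It assumes the signature and previous signature bytes have already been hex-decoded; the big-endian round encoding follows the drand specification, and verifying the BLS signature itself against the beacon’s PublicKey requires a BLS12-381 library and is omitted here.

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// beaconMessage reconstructs the message the drand nodes sign for a round:
// SHA256(PreviousSignature || Round), with the round encoded as a big-endian uint64.
func beaconMessage(previousSignature []byte, round uint64) []byte {
	h := sha256.New()
	h.Write(previousSignature)
	var roundBytes [8]byte
	binary.BigEndian.PutUint64(roundBytes[:], round)
	h.Write(roundBytes[:])
	return h.Sum(nil)
}

// checkRandomness verifies that the Randomness field equals SHA256(Signature).
func checkRandomness(signature, randomness []byte) bool {
	digest := sha256.Sum256(signature)
	return bytes.Equal(digest[:], randomness)
}

func main() {
	// Placeholder bytes; real entries come from the beacon as shown above.
	prevSig := []byte{0xaf, 0xc5, 0x45, 0xef}
	fmt.Printf("message to verify against the PublicKey: %x\n", beaconMessage(prevSig, 367))
	fmt.Println(checkRandomness([]byte{0xb6, 0x2d}, nil)) // false for these placeholder values
}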

Fetching the appropriate drand value while mining

There is a deterministic mapping between a needed drand round number and a Filecoin epoch number.

After initializing access to a drand beacon, a Filecoin node should have access to the following values:

  • filEpochDuration – the Filecoin network’s epoch duration (between any two leader elections)
  • filGenesisTime – the Filecoin genesis timestamp
  • filEpoch – the current Filecoin epoch
  • drandGenesisTime – drand’s genesis timestamp
  • drandPeriod – drand’s epoch duration (between any two randomness creations)

Using the above, a Filecoin node can determine the appropriate drand round value to be used in secret leader election for an epoch, based on both networks’ reliance on real time, as follows:

MaxBeaconRoundForEpoch(filEpoch) {
    // determine what the latest Filecoin timestamp was from the current epoch number
    var latestTs
    if filEpoch == 0 {
        latestTs = filGenesisTime
    } else {
        latestTs = ((uint64(filEpoch) * filEpochDuration) + filGenesisTime) - filEpochDuration
    }
    // determine the drand round number corresponding to this timestamp
    // keeping in mind that drand emits round 1 at the drandGenesisTime
    dround := (latestTs - drandGenesisTime) / uint64(drandPeriod) + 1
    return dround
}

Edge cases and dealing with a drand outage

It is important to note that any drand beacon outage will effectively halt Filecoin block production. Given that new randomness is not produced, Filecoin miners cannot generate new blocks. Specifically, any call to the drand network for a new randomness entry during an outage should be blocking in Filecoin.

After a beacon downtime, drand nodes will work to quickly catch up to the current round, as defined by wall clock time. In this way, the above time-to-round mapping in drand (see above) used by Filecoin remains an invariant after this catch-up following downtime.

So while Filecoin miners were not able to mine during the drand outage, they will quickly be able to run leader election thereafter, given a rapid production of drand values. We call this a “catch up” period.

During the catch up period, Filecoin nodes will backdate their blocks in order to continue using the same time-to-round mapping to determine which drand round should be integrated according to the time. Miners can then choose to publish their null blocks for the outage period (including the appropriate drand entries throughout the blocks, per the time-to-round mapping), or (as is more likely) try to craft valid blocks that might have been created during the outage.

Note that based on the level of decentralization of the Filecoin network, we expect to see varying levels of miner collaboration during this period. This is because there are two incentives at play: trying to mine valid blocks from during the outage to collect block rewards, and not falling behind a heavier chain being mined by a majority of miners that may or may not have ignored a portion of these blocks.

In any event, a heavier chain will emerge after the catch up period and mining can resume as normal.

IPFS

Filecoin is built on the same underlying stack as IPFS - including connecting nodes peer-to-peer via libp2p and addressing data using IPLD. Therefore, it borrows many concepts from the InterPlanetary File System (IPFS), such as content addressing, the CID (which, strictly speaking, is part of the Multiformats specification) and Merkle-DAGs (which are part of IPLD). It also makes direct use of Bitswap (the data transfer algorithm in IPFS) and UnixFS (the file format built on top of IPLD Merkle-DAGs).

Bitswap

Bitswap is a simple peer-to-peer data exchange protocol, used primarily in IPFS, which can also be used independently of the rest of the pieces that make up IPFS. In Filecoin, Bitswap is used to request and receive blocks when a node is synchronized (“caught up”) but GossipSub has failed to deliver some blocks to a node.

Please refer to the Bitswap specification for more information.

UnixFS

UnixFS is a protocol buffers-based format for describing files, directories, and symlinks in IPFS. UnixFS is used in Filecoin as a file formatting guideline for files submitted to the Filecoin network.

Please refer to the UnixFS specification for more information.

Multiformats

Multiformats is a set of self-describing protocol values. These values are useful both to the data layer (IPLD) and to the network layer (libp2p). Multiformats includes specifications for the Content Identifier (CID) used by IPLD and IPFS, the multicodec, multibase and multiaddress (used by libp2p).

Please refer to the Multiformats repository for more information.

CIDs

Filecoin references data using IPLD’s Content Identifier (CID).

A CID is a hash digest prefixed with identifiers for its hash function and codec. This means you can validate and decode data with only this identifier.

When CIDs are printed as strings they also use multibase to identify the base encoding being used.
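
As a hedged illustration using the go-cid and go-multihash Go libraries (commonly used by Filecoin implementations; exact import paths may vary by version), the sketch below builds a CIDv1 for a few raw bytes and inspects its self-describing parts:

package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Hash the data, then wrap the digest with codec and hash-function identifiers.
	data := []byte("hello filecoin")
	digest, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}
	c := cid.NewCidV1(cid.Raw, digest)

	// When printed, the CID string is multibase-encoded (base32 by default for CIDv1).
	fmt.Println(c.String())

	// The prefix exposes the self-describing components of the CID.
	p := c.Prefix()
	fmt.Printf("version=%d codec=%d mhType=%d mhLen=%d\n", p.Version, p.Codec, p.MhType, p.MhLength)
}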

For a more detailed specification, please see the CID specification.

Multihash

A Multihash is a set of self-describing hash values. Multihash is used for differentiating outputs from various well-established cryptographic hash functions, while addressing size and encoding considerations.

Please refer to the Multihash specification for more information.

Multiaddr

A Multiaddress is a self-describing network address. Multiaddresses are composable and future-proof network addresses used by libp2p.

Please refer to the Multiaddr specification for more information.

IPLD

The InterPlanetary Linked Data or IPLD is the data model of the content-addressable web. It provides standards and formats to build Merkle-DAG data-structures, like those that represent a filesystem. IPLD allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data via hashes as instances of IPLD. This means that data can be linked and referenced from totally different data structures in a global namespace. This is a very useful feature that is used extensively in Filecoin.

IPLD introduces several concepts and protocols, such as the concept of content addressing itself, codecs such as DAG-CBOR, file formats such as Content Addressable aRchives (CARs), and protocols such as GraphSync.

Please refer to the IPLD specifications repository for more information.

DAG-CBOR encoding

All Filecoin system data structures are stored using DAG-CBOR (which is an IPLD codec). DAG-CBOR is a more strict subset of CBOR with a predefined tagging scheme, designed for storage, retrieval and traversal of hash-linked data DAGs.

Files and data stored on the Filecoin network are also stored using various IPLD codecs (not necessarily DAG-CBOR). IPLD provides a consistent and coherent abstraction above data that allows Filecoin to build and interact with complex, multi-block data structures, such as HAMT and AMT. Filecoin uses the DAG-CBOR codec for the serialization and deserialization of its data structures and interacts with that data using the IPLD Data Model, upon which various tools are built. IPLD Selectors are also used to address specific nodes within a linked data structure (see GraphSync below).

Please refer to the DAG-CBOR specification for more information.

Content Addressable aRchives (CARs)

The Content Addressable aRchives (CAR) format is used to store content addressable objects in the form of IPLD block data as a sequence of bytes; typically in a file with a .car filename extension.

The CAR format is used to produce a Filecoin Piece (the main representation of files in Filecoin) by serialising its IPLD DAG. The .car file then goes through further transformations to produce the Piece CID.

Please refer to the CAR specification for further information.

GraphSync

GraphSync is a request-response protocol that synchronizes parts of a graph (an authenticated Directed Acyclic Graph - DAG) between different peers. It uses selectors to identify the specific subset of the graph to be synchronized between different peers.

GraphSync is used by Filecoin in order to synchronize parts of the blockchain.

Please refer to the GraphSync specification for more information.

Libp2p

Libp2p is a modular network protocol stack for peer-to-peer networks. It consists of a catalogue of modules from which p2p network developers can select and reuse just the protocols they need, while making it easy to upgrade and interoperate between applications. This includes several protocols and algorithms to enable efficient peer-to-peer communication like peer discovery, peer routing and NAT Traversal. While libp2p is used by both IPFS and Filecoin, it is a standalone stack that can be used independently of these systems as well.

There are several implementations of libp2p, which can be found at the libp2p GitHub repository. The specification of libp2p can be found in its specs repo and its documentation at https://docs.libp2p.io.

Below we discuss how some of libp2p’s components are used in Filecoin.

DHT

The Kademlia DHT implementation of libp2p is used by Filecoin for peer discovery and peer exchange. Libp2p’s PeerID is used as the ID scheme for Filecoin storage miners and more generally Filecoin nodes. One way that clients find miner information, such as a miner’s address, is by using the DHT to resolve the associated PeerID to the miner’s Multiaddress.

The Kademlia DHT implementation of libp2p in go can be found in its GitHub repository.
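
As a hedged sketch of this lookup (import paths and constructor options differ across go-libp2p versions, and the libp2p host is assumed to have been constructed elsewhere), a node can resolve a miner’s PeerID to its multiaddresses roughly as follows:

package example

import (
	"context"

	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// findMinerAddrs resolves a miner's PeerID to its multiaddresses using the
// Kademlia DHT, given an already-constructed libp2p host.
func findMinerAddrs(ctx context.Context, h host.Host, minerPeer peer.ID) ([]string, error) {
	kad, err := dht.New(ctx, h)
	if err != nil {
		return nil, err
	}
	info, err := kad.FindPeer(ctx, minerPeer)
	if err != nil {
		return nil, err
	}
	var addrs []string
	for _, a := range info.Addrs {
		addrs = append(addrs, a.String())
	}
	return addrs, nil
}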

GossipSub

GossipSub is libp2p’s pubsub protocol. Filecoin uses GossipSub for message and block propagation among Filecoin nodes. The recent hardening extensions of GossipSub include a number of techniques to make it robust against a variety of attacks.

Please refer to GossipSub’s Spec section, or the protocol’s more complete specification for details on its design, implementation and parameter settings. A technical report is also available, which discusses the design rationale of the protocol.

Algorithms

Expected Consensus

Algorithm

Expected Consensus (EC) is a probabilistic Byzantine fault-tolerant consensus protocol. At a high level, it operates by running a leader election every epoch in which, on expectation, a set number of participants may be eligible to submit a block. EC guarantees that these winners will be anonymous until they reveal themselves by submitting a proof that they have been elected, the ElectionProof. Each winning miner can submit one such proof per round and will be rewarded proportionally to its power. From this point on, each winning miner also creates a proof of storage (aka Winning PoSt). Each proof can be derived from a properly formatted beacon entry, as described below.

All valid blocks submitted in a given round form a Tipset. Every block in a Tipset adds weight to its chain. The ‘best’ chain is the one with the highest weight, which is to say that the fork choice rule is to choose the heaviest known chain. For more details on how to select the heaviest chain, see Chain Selection. While on expectation at least one block will be generated at every round, in cases where no one finds a block in a given round, a miner can simply run leader election again for the next epoch with the appropriate random seed, thereby ensuring liveness in the protocol.

The randomness used in the proofs is generated from DRAND, an unbiasable randomness generator, through a beacon. When the miner wants to publish a new block, they invoke the getRandomness function providing the chain height (i.e., epoch) as input. The randomness value is returned through the DRAND beacon and included in the block. For the details of DRAND and its implementation, please consult the project’s documentation and specification.

The Storage Power Consensus subsystem relies on EC for the following facilities:

  • Access to verifiable randomness for the protocol, derived from Tickets.
  • Running and verifying leader election for block generation.
  • Access to a weighting function enabling Chain Selection by the chain manager.
  • Access to the most recently finalized tipset available to all protocol participants.

Tickets in EC

There are two kinds of tickets:

  1. ElectionProof ticket: the output of a VRF run on DRAND input. In particular, the miner takes the DRAND randomness beacon value and gives it as input to the VRF, together with the miner’s worker key.
  2. Ticket-chain ticket: generated using the VRF as above, but with the previous ticket concatenated into the input. That is, the new ticket is generated by running the VRF on the old ticket concatenated with the new DRAND value (and the key, as before).

## Get the randomness value from the DRAND beacon, giving the current epoch as input.
Drand_value = GetRandomness(current_epoch)

## Create the ElectionProof ticket by calling the VRF with the secret key of the miner's worker and the DRAND value obtained in the previous step.
Election_Proof = VRF(sk, Drand_value)

## Extend the VRF ticket chain by running the VRF on the previous ticket concatenated with the current DRAND value (again with the secret key of the miner's worker).
new_ticket = VRF(sk, Drand_value || previous_ticket)

Within Storage Power Consensus (SPC), a miner generates a new ticket for every block on which they run a leader election. This means that the ticket chain is always as long as the blockchain.

Through the use of VRFs and thanks to the unbiasable design of DRAND, we achieve the following two properties.

  • Ensure leader secrecy: meaning a block producer will not be known until they release their block to the network.
  • Prove leader election: meaning a block producer can be verified by any participant in the network.

Secret Leader Election

Expected Consensus is a consensus protocol that works by electing a miner from a weighted set in proportion to their power. In the case of Filecoin, participants and powers are drawn from The Power Table, where power is equivalent to storage provided over time.

Leader Election in Expected Consensus must be Secret, Fair and Verifiable. This is achieved through the use of randomness used to run the election. In the case of Filecoin’s EC, the blockchain uses Beacon Entries provided by a drand beacon. These seeds are used as unbiasable randomness for Leader Election. Every block header contains an ElectionProof derived by the miner using the appropriate seed. As noted earlier, there are two ways through which randomness can be used in the Filecoin EC: i) through the ElectionProof ticket, and ii) through the VRF ticket chain.

Running a leader election

The miner whose block has been submitted must be checked to verify that they are eligible to mine a block in this round, i.e., they have not been slashed in the previous round.

Design goals here include:

  • Miners should be rewarded proportional to their power in the system
  • The system should be able to tune how many blocks are put out per epoch on expectation (hence “expected consensus”).

A miner will use the ElectionProof ticket to uniformly draw a value from 0 to 1 when crafting a block.

Winning a block

Step 1: Check for leader election

A miner checks if they are elected for the current epoch by running GenerateElectionProof.

Recall that a miner is elected proportionally to their quality adjusted power at ElectionPowerTableLookback.

A requirement for setting ElectionPowerTableLookback is that it must be larger than finality. This is because if ElectionPowerTableLookback is shorter, a malicious miner could create sybils with different VRF keys to increase the chances of election and then fork the chain to assign power to those keys.

The steps of this well-known attack in Proof of Stake systems would be:

  1. The miner generates keys used in the VRF part of the election until they find a key that would allow them to win.
  2. The miner forks the chain and creates a miner with the winning key.

This is generally a problem in Proof of Stake systems where the stake table is read from the past to make sure that no staker can do a transfer of stake to a new key that they found to be winning.

Step 2: Generate a storage proof (WinningPoSt)

An elected miner gets the randomness value through the DRAND randomness generator based on the current epoch and uses it to generate WinningPoSt.

WinningPoSt uses the randomness to select a sector for which the miner must generate a proof. If the miner is not able to generate this proof within some predefined amount of time, then they will not be able to create a block. The sector is chosen from the power table WinningPoStSectorSetLookback epochs in the past.

Similarly to ElectionPowerTableLookback, a requirement for setting WinningPoStSectorSetLookback is that it must be larger than finality. This is to enforce that a miner cannot play with the power table and change which sector is challenged for WinningPoSt (i.e., set the challenged sector to one of their preference).

If WinningPoStSectorSetLookback is not longer than finality, a miner could try to create forks to change their sector allocation, for example to get a more favourable sector challenged. A simple attack could unfold as follows:

  • The power table at epoch X shows that the attacker has sectors 1, 2, 3, 4, 5.
  • The miner decides to not store sector 5.
  • The miner wins the election at epoch X.
    • Main fork: Miner is asked a WinningPoSt for sector 5 for which they won’t be able to provide a proof.
    • The miner creates a fork, terminates sector 5 in epochs before X.
    • At X, the miner is now challenged a different sector (not 5).

Note that there are variants of this attack in which the miner introduces a new sector to change which sector will be challenged.

What happens if a sector expired after WinningPoStSectorSetLookback?

An expired sector will not be challenged during WindowPoSt (hence not penalized after its expiration). However, an edge case around the fact that WinningPoStSectorSetLookback is longer than finality is that due to the lookback, a miner can be challenged for an expired sector between expirationEpoch and expirationEpoch + WinningPoStSectorSetLookback - 1. Therefore, it is important that miners keep an expired sector for WinningPoStSectorSetLookback more epochs after expiration, or they will not be able to generate a WinningPoSt (and get the corresponding reward).

Example:

  • At epoch X:
    • Sector expires and miner deletes the sector.
  • At epoch X+WinningPoStSectorSetLookback-1:
    • The expired sector gets selected for WinningPoSt
    • The miner will not be able to generate the WinningPoSt and will therefore miss the block reward.

Step 3: Block creation and propagation

If the above is successful, miners build a block and propagate it.

GenerateElectionProof

GenerateElectionProof outputs the result of whether a miner won the block or not as well as the quality of the block mined.

The “WinCount” is an integer that is used for weight and block reward calculations. For example, a WinCount equal to “2” is equivalent to two blocks of quality “1”.

High level algorithm
  • Get the percentage of power at block ElectionPowerTableLookback
    • Get the power of the miner at block ElectionPowerTableLookback
    • Get the total network power at block ElectionPowerTableLookback
  • Get randomness for the current epoch using GetRandomness.
  • Generate a VRF and compute its hash
    • The storage miner’s workerKey is given as input in the VRF process
  • Compute WinCount: The smaller the hash is, the higher the WinCount will be
    • Compute the probability distribution of winning k blocks (i.e. Poisson distribution see below for details)
    • Let h_n be the normalised VRF, i.e. h_n = vrf.Proof/2^256, where vrf.Proof is the ElectionProof ticket.
    • The probability of winning one block is 1-P[X=0], where X is a Poisson random variable following Poisson distribution with parameter lambda = MinerPowerShare*ExpectedLeadersPerEpoch. Thus, if h_n is less than 1-P[X=0], the miner wins at least one block.
    • Similarly if h_n is less than 1-P[X=0]-P[X=1] we have at least two blocks and so on.
    • While it is not permitted for a single miner to publish two distinct blocks in the same epoch, in this case the miner produces a single block which earns two block rewards.

Explanations - Poisson Sortition

Filecoin is building on the principle that a miner possessing X% of network power should win as many times as X miners with 1% of network power in the election algorithm.

A straightforward solution to model the situation is using a Binomial distribution with parameters p = MinerPower/TotalPower and n = ExpectedLeadersPerEpoch. However, given that we effectively want every miner to roll an uncorrelated/independent die and want to be invariant to miner pooling, it turns out that Poisson is the ideal distribution for our case.

Despite this finding, we wanted to assess the difference between the two distributions in terms of the probability mass function.

Using lambda = (MinerPower/TotalPower)*ExpectedLeadersPerEpoch as the parameter for the Poisson distribution, and assuming TotalPower = 10000, MinerPower = 3300 and ExpectedLeadersPerEpoch = 5, we find (see table) that the probability mass functions of the Binomial and Poisson distributions do not differ much.

k Binomial Poisson
0 0.19197 0.19205
1 0.31691 0.31688
2 0.26150 0.26143
3 0.14381 0.14379
4 0.05930 0.05931
5 0.01955 0.01957
6 0.00537 0.00538
7 0.00126 0.00127
8 0.00026 0.00026
9 0.00005 0.00005

Justification for the need of WinCount

It should not be possible for a miner to split their power into multiple identities and have more chances of winning more blocks than keeping their power under one identity. In particular, Strategy 2 below should not be possible to achieve.

  • Strategy 1: A miner with X% can run a single election and win a single block.
  • Strategy 2: The miner splits its power in multiple sybil miners (with the sum still equal to X%), running multiple elections to win more blocks.

WinCount guarantees that a lucky single block will earn the same reward as the reward that the miner would earn had they split their power into multiple sybils.

Alternative Options for the Distribution/Sortition

Bernoulli, Binomial and Poisson distributions have been considered for the WinCount of a miner with power p out of a total network power of N. There are the following options:

  • Option 1: WinCount(p,N) ~ Bernoulli(pE/N)
  • Option 2: WinCount(p,N) ~ Binomial(E, p/N)
  • Option 3: WinCount(p,N) ~ Binomial(p, E/N)
  • Option 4: WinCount(p,N) ~ Binomial(p/M, ME/N)
  • Option 5: WinCount(p,N) ~ Poisson(pE/N)

Note that in Options 2-5 the expectation of the win-count grows linearly with the miner’s power p. That is, 𝔼[WinCount(p,N)] = pE/N. For Option 1 this property does not hold when p/N > 1/E.

Furthermore, in Options 1, 3 and 5 the WinCount distribution is invariant to the number of Sybils in the system. In particular, WinCount(p,N) is distributed as 2·WinCount(p/2,N), which is a desirable property.

In Option 5 (the one used in Filecoin Leader Election), the ticket targets for each WinCount k, ranging from 1 to mE (with m = 2 or 3), approximate the inverted (complementary) CDF of a Poisson distribution with rate λ = pE/N; explicitly, 2^256 * (1 - exp(-pE/N) * Σ_{i=0}^{k-1} (pE/N)^i / i!).

Rationale for the Poisson Sortition choice

  • Option 1 - Bernoulli(pE/N): this option is easy to implement, but comes with a drawback: if the miner’s share of power exceeds 1/E, the miner’s WinCount is always 1 and can never be higher.
  • Option 2 - Binomial(E, p/N): the expectation of WinCount stays the same irrespective of whether the miner splits their power into more than one Sybil node, but the variance increases if they choose to Sybil. Risk-seeking miners will prefer to Sybil, while risk-averse miners will prefer to pool, neither of which is a behaviour the protocol should encourage. This option is not computationally expensive, as it would involve calculation of factorials and fixed-point multiplications (or small integer exponents) only.
  • Option 3 - Binomial(p, E/N): this option is computationally inefficient. It involves very large integer exponents.
  • Option 4 - Binomial(p/M, ME/N): the complexity of this option depends on the value of M. A small M results in high computational cost, similarly to Option 3. A large M, on the other hand, leads to a situation similar to that of Option 2, where a risk-seeking miner is incentivized to Sybil. Clearly none of these are desirable properties.
  • Option 5 - Poisson(pE/N): the chosen option presents the implementation difficulty of having to hard-code the co-efficients (see below), but overcomes all of the problems of the previous options. Furthermore, the expensive part, that is calculating exp(λ), or exp(-pE/N) has to be calculated only once.

Coefficient Approximation

We use Horner’s rule with 128-bit fixed-point coefficients, given below in decimal, to evaluate a rational approximation (a ratio of two polynomials) of exp(-x). The coefficients are:

(x * (x * (x * (x * (x * (x * (x * (
-648770010757830093818553637600
*2^(-128)) +
67469480939593786226847644286976
*2^(-128)) +
-3197587544499098424029388939001856
*2^(-128)) +
89244641121992890118377641805348864
*2^(-128)) +
-1579656163641440567800982336819953664
*2^(-128)) +
17685496037279256458459817590917169152
*2^(-128)) +
-115682590513835356866803355398940131328
*2^(-128))
+ 1) /
(x * (x * (x * (x * (x * (x * (x * (x * (x * (x * (x * (x * (x * (
1225524182432722209606361
*2^(-128)) +
114095592300906098243859450
*2^(-128)) +
5665570424063336070530214243
*2^(-128)) +
194450132448609991765137938448
*2^(-128)) +
5068267641632683791026134915072
*2^(-128)) +
104716890604972796896895427629056
*2^(-128)) +
1748338658439454459487681798864896
*2^(-128)) +
23704654329841312470660182937960448
*2^(-128)) +
259380097567996910282699886670381056
*2^(-128)) +
2250336698853390384720606936038375424
*2^(-128)) +
14978272436876548034486263159246028800
*2^(-128)) +
72144088983913131323343765784380833792
*2^(-128)) +
224599776407103106596571252037123047424
*2^(-128))
+ 1)

Implementation Guidelines
  • The ElectionProof ticket struct in the block header has two fields:
    • vrf.Proof, the output of the VRF, or ElectionProof ticket.
    • WinCount that corresponds to the result of the Poisson Sortition.
  • WinCount needs to be > 0 for winning blocks.
  • WinCount is included in the tipset weight function. The sum of WinCounts of a tipset replaces the size of tipset factor in the weight function.
  • WinCount is passed to Reward actor to increase the reward for blocks winning multiple times.

GenerateElectionProof(epoch) {
  electionProofInput := GetRandomness(DomainSeparationTag_ElectionProofProduction, epoch, CBOR_Serialize(miner.address))
  vrfResult := miner.VRFSecretKey.Generate(electionProofInput)

  if GetWinCount(vrfResult.Digest, minerID, epoch) > 0 {
    return vrfResult.Proof, GetWinCount(vrfResult.Digest, minerID, epoch)
  }
  return nil
}

GetWinCount(proofDigest, minerID, epoch) {
  // for SHA256; more generally it is 2^len(H)
  const maxDigestSize = 2^256
  minerPower = GetMinerPower(minerID, epoch - PowerTableLookback)
  totalPower = GetTotalPower(epoch - PowerTableLookback)
  if minerPower == 0 {
    return 0
  }
  lambda = minerPower / totalPower * ExpectedLeadersPerEpoch
  h = hash(proofDigest) / maxDigestSize
  rhs = 1 - PoissPmf(lambda, 0)

  WinCount = 0
  for h < rhs {
    WinCount++
    rhs -= PoissPmf(lambda, WinCount)
  }
  return WinCount
}
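
The floating-point sketch below mirrors the GetWinCount pseudocode purely for illustration; poissonPmf and winCount are hypothetical helpers, hNormalised stands for the VRF digest divided by 2^256, and consensus-critical implementations use the fixed-point polynomial approximation described in the next section instead of math.Exp so that every node computes identical results.

package main

import (
	"fmt"
	"math"
)

// poissonPmf returns P[X = k] for a Poisson random variable with rate lambda.
func poissonPmf(lambda float64, k int) float64 {
	logFactorial := 0.0
	for i := 2; i <= k; i++ {
		logFactorial += math.Log(float64(i))
	}
	return math.Exp(-lambda + float64(k)*math.Log(lambda) - logFactorial)
}

// winCount follows the GetWinCount pseudocode above: keep lowering the
// threshold by successive Poisson probabilities until the normalised VRF
// draw no longer falls below it.
func winCount(hNormalised, minerPower, totalPower, expectedLeadersPerEpoch float64) int {
	if minerPower == 0 {
		return 0
	}
	lambda := minerPower / totalPower * expectedLeadersPerEpoch
	rhs := 1 - poissonPmf(lambda, 0)
	count := 0
	for hNormalised < rhs {
		count++
		rhs -= poissonPmf(lambda, count)
	}
	return count
}

func main() {
	// With 33% of a 10000-unit network and 5 expected leaders per epoch,
	// lambda = 1.65; a draw of 0.25 wins two blocks.
	fmt.Println(winCount(0.25, 3300, 10000, 5))
}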

Leader Election Verification

In order to verify that the leader ElectionProof ticket in a block is correct, miners perform the following checks:

  • Verify that the randomness is correct by checking GetRandomness(epoch)
  • Use this randomness to verify the VRF correctness Verify_VRF(vrf.Proof,beacon,public_key), where vrf.Proof is the ElectionProof ticket.
  • Verify ElectionProof.WinCount > 0 by checking GetWinCount(vrf.Proof, miner,epoch), where vrf.Proof is the ElectionProof ticket.

Chain Selection

Just as there can be zero miners winning in a round, there can equally be multiple miners elected in a given round. This in turn means multiple blocks can be created in a round, as seen above. In order to avoid wasting valid work done by miners, EC makes use of all valid blocks generated in a round.

Chain Weighting

It is possible for forks to emerge naturally in Expected Consensus. EC relies on weighted chains in order to quickly converge on ‘one true chain’, with every block adding to the chain’s weight. This means the heaviest chain should reflect the greatest amount of work performed, or in Filecoin’s case, the largest amount of committed storage.

In short, the weight at each block is equal to its ParentWeight plus that block’s delta weight. Details of Filecoin’s chain weighting function are included here.

Delta weight is a term composed of a few elements:

  • wPowerFactor: which adds weight to the chain proportional to the total power backing the chain, i.e. accounted for in the chain’s power table.
  • wBlocksFactor: which adds weight to the chain proportional to the number of tickets mined in a given epoch. It rewards miner cooperation (which will yield more blocks per round on expectation).

The weight should be calculated using big integer arithmetic with order of operations defined above. We use brackets instead of parentheses below for legibility. We have:

w[r+1] = w[r] + (wPowerFactor[r+1] + wBlocksFactor[r+1]) * 2^8

For a given tipset ts in round r+1, we define:

  • wPowerFactor[r+1] = wFunction(totalPowerAtTipset(ts))
  • wBlocksFactor[r+1] = wPowerFactor[r+1] * wRatio * t / e
    • with t = |ticketsInTipset(ts)|
    • e = expected number of tickets per round in the protocol
    • and wRatio in ]0, 1[. Thus, for stability of weight across implementations, we take:
  • wBlocksFactor[r+1] = (wPowerFactor[r+1] * t * wRatio_num) / (e * wRatio_den)

We get:

w[r+1] = w[r] + wFunction(totalPowerAtTipset(ts)) * 2^8 + (wFunction(totalPowerAtTipset(ts)) * len(ts.tickets) * wRatio_num * 2^8) / (e * wRatio_den)

The 2^8 factor is used here to prevent precision loss ahead of the division in the wBlocksFactor.

The exact values for these parameters remain to be determined, but for testing purposes, you may use:

  • e = 5
  • wRatio = .5, or wRatio_num = 1, wRatio_den = 2
  • wFunction = log2b with
    • log2b(X) = floor(log2(X)) = (binary length of X) - 1 and log2b(0) = 0. Note that this special case should never be hit (given it would mean an empty power table).
Note that if your implementation does not allow for rounding to the fourth decimal, miners should apply the tie-breaker below. Weight changes will be on the order of single digit numbers on expectation, so this should not have an outsized impact on chain consensus across implementations.
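
As an integer-only sketch of the delta term above (using the test parameters e = 5 and wRatio = 1/2; the example total power and ticket count are arbitrary inputs, and real implementations keep every factor in big-integer arithmetic):

package main

import (
	"fmt"
	"math/big"
)

// log2b(x) = floor(log2(x)) = (binary length of x) - 1, with log2b(0) = 0, as above.
func log2b(x *big.Int) int64 {
	if x.Sign() == 0 {
		return 0
	}
	return int64(x.BitLen() - 1)
}

// weightDelta computes (wPowerFactor + wBlocksFactor) * 2^8 for one tipset,
// i.e. the amount added to the parent weight, using integer arithmetic only.
func weightDelta(totalPowerAtTipset *big.Int, numTickets int64) *big.Int {
	const (
		e         = 5 // expected number of tickets per round
		wRatioNum = 1
		wRatioDen = 2
	)
	wPowerFactor := log2b(totalPowerAtTipset)
	delta := big.NewInt(wPowerFactor << 8)
	blocksTerm := big.NewInt(((wPowerFactor * numTickets * wRatioNum) << 8) / (e * wRatioDen))
	return delta.Add(delta, blocksTerm)
}

func main() {
	// e.g. 10 PiB of total network power (in bytes) and a 3-ticket tipset
	power := new(big.Int).Lsh(big.NewInt(10), 50)
	fmt.Println(weightDelta(power, 3))
}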

ParentWeight is the aggregate chain weight of a given block’s parent set. It is calculated as the ParentWeight of any of its parent blocks (all blocks in a given Tipset should have the same ParentWeight value) plus the delta weight of each parent. To make the computation a bit easier, a block’s ParentWeight is stored in the block itself (otherwise potentially long chain scans would be required to compute a given block’s weight).

Selecting between Tipsets with equal weight

When selecting between Tipsets of equal weight, a miner chooses the one with the smallest final ElectionProof ticket.

In the case where two Tipsets of equal weight have the same minimum VRF output, the miner will compare the next smallest ticket in the Tipset (and select the Tipset with the next smaller ticket). This continues until one Tipset is selected.

The above case may happen under certain block propagation conditions. Assume three blocks B, C, and D have been mined (by miners 1, 2, and 3 respectively) off of block A, with minTicket(B) < minTicket(C) < minTicket(D).

Miner 1 outputs their block B and shuts down. Miners 2 and 3 both receive B but not each other’s blocks. We have miner 2 mining a Tipset made of B and C and miner 3 mining a Tipset made of B and D. If both successfully mine blocks now, other miners in the network will receive new blocks built off of Tipsets with equal weight and the same smallest VRF output (that of block B). They should select the block mined atop [B, C] since minVRF(C) < minVRF(D).

The probability that two Tipsets with different blocks would have all the same VRF output can be considered negligible: this would amount to finding a collision between two 256-bit (or more) collision-resistant hashes. Behaviour is explicitly left unspecified in this case.
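
A sketch of this tie-breaker over raw ticket bytes follows; compareEqualWeightTipsets is an illustrative helper, not a protocol-defined function, and it assumes the two tipsets have already been found to have equal weight:

package example

import (
	"bytes"
	"sort"
)

// compareEqualWeightTipsets sorts each tipset's election tickets (VRF outputs)
// and prefers the tipset whose first differing ticket is smaller. It returns a
// negative value if a is preferred, a positive value if b is preferred, and 0
// if the compared tickets are identical (an event treated as negligible above).
func compareEqualWeightTipsets(a, b [][]byte) int {
	sorted := func(ts [][]byte) [][]byte {
		out := append([][]byte{}, ts...)
		sort.Slice(out, func(i, j int) bool { return bytes.Compare(out[i], out[j]) < 0 })
		return out
	}
	sa, sb := sorted(a), sorted(b)
	for i := 0; i < len(sa) && i < len(sb); i++ {
		if c := bytes.Compare(sa[i], sb[i]); c != 0 {
			return c
		}
	}
	return 0
}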

Finality in EC

EC enforces a version of soft finality whereby all miners at round N will reject all blocks that fork off prior to round N-F. For illustrative purposes, we can take F to be 900. While strictly speaking EC is a probabilistically final protocol, choosing such an F simplifies miner implementations and enforces a macroeconomically-enforced finality at no cost to liveness in the chain.

Consensus Faults

Due to the existence of potential forks in EC, a miner can try to unduly influence protocol fairness. This means they may choose to disregard the protocol in order to gain an advantage over the power they should normally get from their storage on the network. A miner should be slashed if they are provably deviating from the honest protocol.

This is detectable when a given miner submits two blocks that satisfy any of the following “consensus faults”. In all cases, we must have:

  • both blocks were mined by the same miner
  • both blocks have valid signatures
  • the first block’s epoch is smaller than or equal to the second block’s

Types of faults

  1. Double-Fork Mining Fault: two blocks mined at the same epoch (even if they have the same tipset).

  2. Time-Offset Mining Fault: two blocks mined off of the same Tipset at different epochs.

  3. Parent-Grinding Fault: one block’s parent is a Tipset that provably should have included a given block but does not. While it cannot be proven that a missing block was willfully omitted in general (i.e. network latency could simply mean the miner did not receive a particular block), it can when a miner has successfully mined a block two epochs in a row and omitted one. That is, this condition should be evoked when a miner omits their own prior block. Specifically, this can be proven with a “witness” block, that is by submitting blocks B2, B3, B4 where B2 is B4’s parent and B3’s sibling but B3 is not B4’s parent. - !B4.Parents.Include(B3) && B4.Parents.Include(B2) && B3.Parents == B2.Parents && B3.Epoch == B2.Epoch

    Parent-Grinding fault
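
The sketch below expresses the three conditions over a simplified view of a block header; the Block struct and helpers are illustrative assumptions rather than the actual chain types, and the common preconditions above (same miner, valid signatures, ordered epochs) are assumed to have been checked already.

package example

// Block is a simplified, hypothetical view of a block header carrying only the
// fields needed to express the consensus-fault conditions above.
type Block struct {
	Miner   string   // miner address
	Epoch   int64    // block epoch
	Parents []string // CIDs of the parent tipset's blocks
	CID     string   // this block's own CID
}

func includes(parents []string, c string) bool {
	for _, p := range parents {
		if p == c {
			return true
		}
	}
	return false
}

func sameParents(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for _, c := range a {
		if !includes(b, c) {
			return false
		}
	}
	return true
}

// DoubleForkMiningFault: two distinct blocks mined at the same epoch.
func DoubleForkMiningFault(b1, b2 Block) bool {
	return b1.CID != b2.CID && b1.Epoch == b2.Epoch
}

// TimeOffsetMiningFault: two blocks mined off of the same Tipset at different epochs.
func TimeOffsetMiningFault(b1, b2 Block) bool {
	return sameParents(b1.Parents, b2.Parents) && b1.Epoch != b2.Epoch
}

// ParentGrindingFault: witness block b3 is a sibling of b2 (same parents, same
// epoch) that b4 provably should have referenced as a parent but did not.
func ParentGrindingFault(b2, b3, b4 Block) bool {
	return !includes(b4.Parents, b3.CID) && includes(b4.Parents, b2.CID) &&
		sameParents(b3.Parents, b2.Parents) && b3.Epoch == b2.Epoch
}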

Penalization for faults

A single consensus fault results in:

  • miner suspension
  • loss of all pledge collateral (which includes the initial pledge and block rewards yet to be vested)

Detection and Reporting

A node that detects and reports a consensus fault is called a “slasher”. Any user in Filecoin can be a slasher. They can report consensus faults by calling ReportConsensusFault on the StorageMinerActor of the faulty miner. The slasher is rewarded with a portion of the penalty (the ConsensusFaultPenalty) paid by the offending miner for notifying the network of the consensus fault. Note that some slashers might not get the full reward because of the low balance of the offending miner. However, rational honest miners are still incentivised to notify the network about consensus faults.

The reward given to the slasher is a function of an initial share (SLASHER_INITIAL_SHARE) and a growth rate (SLASHER_SHARE_GROWTH_RATE), and it has a maximum of maxReporterShare. The slasher’s share increases exponentially as epochs elapse since the block in which the fault was committed (see RewardForConsensusSlashReport). Only the first slasher gets their share of the pledge collateral and the remaining pledge collateral is burned. The longer a slasher waits, the higher the likelihood that the slashed collateral will be claimed by another slasher.

Proof-of-Storage

Preliminaries

Storage miners in the Filecoin network have to prove that they hold a copy of the data at any given point in time. This is realised through the Storage Miner Actor, which is the main player in the Storage Mining subsystem. The proof that a storage miner indeed keeps a copy of the data they have promised to store is achieved through “challenges”, that is, by providing answers to specific questions posed by the system. In order for a challenge to indeed prove that the miner stores the data, the challenge has to: i) target a random part of the data and ii) be requested at a time interval such that it is not possible, profitable, or rational for the miner to discard the copy of the data and refetch it when challenged.

General Proof-of-Storage schemes allow a user to check if a storage provider is storing the outsourced data at the time of the challenge. How can we use PoS schemes to prove that some data was being stored throughout a period of time? A natural answer to this question is to require the user to repeatedly (e.g. every minute) send challenges to the storage provider. However, the communication complexity required in each interaction can be the bottleneck in systems such as Filecoin, where storage providers are required to submit their proofs to the blockchain network.

To address this question, we introduce a new proof, called Proof-of-Spacetime, where a verifier can check if a prover has indeed stored the outsourced data they committed to over (storage) Space and over a period of Time (hence, the name Spacetime).

Recall that in the Filecoin network, miners are storing data in fixed-size sectors. Sectors are filled with client data agreed through regular deals in the Storage Market, through verified deals, or with random client data in case of Committed Capacity sectors.

Proof-of-Replication (PoRep)

In order to register a sector with the Filecoin network, the sector has to be sealed. Sealing is a computation-heavy process that produces a unique representation of the data in the form of a proof, called Proof-of-Replication or PoRep.

The PoRep proof ties together: i) the data itself, ii) the miner actor that performs the sealing and iii) the time when the specific data has been sealed by the specific miner. In other words, if the same miner attempts to seal the same data at a later time, then this will result in a different PoRep proof. Time is included as the blockchain height when sealing took place and the corresponding chain reference is called SealRandomness.

Once the proof has been generated, the miner runs a SNARK on the proof in order to compress it and submits the result to the blockchain. This constitutes a certification that the miner has indeed replicated a copy of the data they agreed to store.

The PoRep process includes the following two phases:

Proof-of-Spacetime (PoSt)

From this point onwards, miners have to prove that they continuously store the data they pledged to store. Proof-of-Spacetime (PoSt) is a procedure during which miners are given cryptographic challenges that can only be correctly answered if the miner is actually storing a copy of the sealed data.

There are two types of challenges (and their corresponding mechanisms) that are realised as part of the PoSt process, namely, WinningPoSt and WindowPoSt, each of which serve a different purpose.

  • WinningPoSt is used to prove that the miner has a replica of the data at the specific time when they were asked. A WinningPoSt challenge is issued to a miner only if the miner has been selected by (i.e., wins in) the Secret Leader Election algorithm (of Expected Consensus) to mine the next block. The answer to the WinningPoSt challenge has to be submitted within a short deadline, making it impossible for the miner to seal and find the answer to the challenge on demand. This guarantees that at the time of the challenge the miner maintains a copy of the data.
  • WindowPoSt is used as a proof that a copy of the data has been continuously maintained over time. This involves submitting proofs regularly (see details below) and makes it irrational for a miner to not keep a sealed copy of the data (i.e., it is more expensive to seal a copy of the data every time they are asked to submit a WindowPoSt challenge).

In summary, WinningPoSt guarantees that the miner maintains a copy of the data at some specific point in time (i.e., when chosen by the Expected Consensus algorithm to mine the next block), while WindowPoSt guarantees that the miner continuously maintains a copy of the data over time.

Constants & Terminology

Before continuing into more details of the WinnningPoSt and WindowPoSt algorithms, it is worth clarifying the following terms.

  • partition: a group of 2349 sectors proven simultaneously.
  • proving period: average period for proving all sectors maintained by a miner (currently set to 24 hours).
  • deadline: one of multiple points during a proving period when the proofs for some partitions are due.
  • challenge window: the period immediately before a deadline during which a challenge can be generated by the chain and the requisite proofs computed.
  • miner size: the amount of proven storage maintained by a single miner actor.

WinningPoSt

At the beginning of each epoch, a small number of storage miners are elected to mine new blocks, by Filecoin’s Expected Consensus algorithm. Recall that the Filecoin blockchain operates on the basis of tipsets, therefore multiple blocks can be mined at the same height.

Each of the miners elected to mine a block has to submit a proof that they keep a sealed copy of the data which they have included in their proposed block, before the end of the current epoch. Successful submission of this proof is the WinningPoSt, which in turn grants the miner the Filecoin Block Reward, as well as the opportunity to charge other nodes fees in order to include their messages in the block. If a miner misses the epoch-end deadline, then the miner misses the opportunity to mine a block and get a Block Reward. No penalty is incurred in this case.

Recall that the probability of a storage miner being elected to mine a block is governed by Filecoin’s Expected Consensus algorithm, which guarantees that miners will be chosen (on expectation) proportionally to their Quality Adjusted Power in the network, as reported in the power table ElectionPowerTableLookback epochs before the election.

WindowPoSt

WindowPoSt is the mechanism by which the commitments made by storage miners are audited. In WindowPoSt, every 24-hour period is called a “proving period” and is broken down into a series of 30-minute, non-overlapping deadlines, making a total of 48 deadlines within any given 24-hour proving period. Every miner must demonstrate availability of all claimed sectors on a 24-hour basis. Constraints on individual proof computations limit a single proof to 2349 sectors (a partition), with 10 challenges each.

In particular, the sectors that a miner has pledged to store are: i) assigned to deadlines, and ii) grouped in partitions. It is important to highlight that although sectors are assigned to deadlines, sectors are proven in partitions - not individually. In other words, upon every deadline, a miner has to prove a whole partition.

For each partition, the miner will have to produce a SNARK-compressed proof and publish it to the blockchain as a message in a block. This proves that the miner has indeed stored the pledged sector. In this way, every sector of pledged storage is audited (as part of the partition it belongs to) at least once in any 24-hour period, and a permanent, verifiable, and public record attesting to each storage miner’s continued commitment is kept.

It naturally follows that the more sectors a miner has pledged to store, the more partitions of sectors the miner will need to prove per deadline. This requires ready access to sealed copies of each of the challenged sectors and makes it irrational for the miner to seal data every time they need to provide a WindowPoSt proof.

The Filecoin network expects constant availability of stored files. Failing to submit WindowPoSt for a sector will result in a fault, and the storage miner supplying the sector will be slashed – that is, a portion of their pledge collateral will be forfeited, and their storage power will be reduced (see Storage Power Consensus).

Design

Each miner actor is allocated a 24-hour proving period at random upon creation. This proving period is divided into 48 non-overlapping half-hour deadlines. Each sector is assigned to one of these deadlines when proven to the chain, i.e., when ProveCommit completes, and never changes deadline thereafter. The sets of sectors due at each deadline are recorded in a collection of 48 bitfields.

Generally, sectors are first allocated to fill any deadline up to the next whole-partition multiple of (2349) sectors; next a new partition is started on the deadline with the fewest partitions. If all deadlines have the same number of sectors, a new partition is opened at deadline 0.

The per-deadline sector sets are set at the beginning of each proving period as proving set bitfields and never change. The sector IDs are then (logically) divided sequentially into partitions, and the partitions across all deadlines for the miner logically numbered sequentially. Thus, a sector may move between partitions at the same deadline as other sectors fault or expire, but never changes deadline.

If a miner adds 48 partitions’ worth of sectors (~3.8 PiB), they will have one partition due at each deadline. When a miner has more than 48 partitions, some deadlines will have multiple partitions due. The proofs (i.e., one SNARK proof per partition) for these simultaneous partitions are expected to be computed and submitted together in a single message, at least up to 10-20 partitions per message, but they can be split arbitrarily between messages (which, however, will cost more gas).
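
As a back-of-the-envelope illustration of the arithmetic above, the sketch below (not the actor implementation; the partition size assumes 32 GiB sectors) computes how many partitions a miner proves and the maximum number due at any one deadline.

    // Illustrative sketch only: how many partitions a miner proves, and how they
    // spread across the 48 deadlines of a proving period.
    const WPOST_PERIOD_DEADLINES: u64 = 48;
    const WPOST_PARTITION_SECTORS: u64 = 2349; // sectors per partition (32 GiB sectors)

    /// Ceiling division: a final, partially filled partition still needs its own proof.
    fn partition_count(sector_count: u64) -> u64 {
        (sector_count + WPOST_PARTITION_SECTORS - 1) / WPOST_PARTITION_SECTORS
    }

    fn main() {
        // 48 full partitions of 32 GiB sectors is roughly 3.8 PiB of storage.
        let sectors = 48 * WPOST_PARTITION_SECTORS;
        let partitions = partition_count(sectors);
        // With exactly 48 partitions, each half-hour deadline carries one SNARK proof.
        let max_per_deadline = (partitions + WPOST_PERIOD_DEADLINES - 1) / WPOST_PERIOD_DEADLINES;
        println!("{} partitions, at most {} due per deadline", partitions, max_per_deadline);
    }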

A WindowPoSt proof submission must indicate which deadline it targets and which partition indices the proofs represent for that particular deadline. The actor code receiving a submission maps the partition numbers through the deadline’s proving set bitfields to obtain the sector numbers. Faulty sectors are masked from the proving set by substituting a non-faulty sector number. The actor records successful proof verification for each of the partitions in a bitfield of partition indices (or records nothing if verification fails).

There are currently three types of faults: the Declared Fault, the Detected Fault, and the Skipped Fault. They are discussed in more detail as part of the Storage Mining subsystem.

Summarising:

  • A miner keeps its sectors active by generating Proofs-of-Spacetime (PoSt) and submitting miner.SubmitWindowedPoSt for them in a timely manner.
  • A WindowPoSt proves that sectors are persistently stored through time.
  • Each miner proves all of its sectors once per proving period; each sector must be proven by a particular time, called its deadline.
  • A proving period is a period of WPoStProvingPeriod epochs in which a Miner actor is scheduled to prove its storage.
  • A proving period is evenly divided in WPoStPeriodDeadlines deadlines.
  • Each miner has a different start of proving period ProvingPeriodStart that is assigned at Power.CreateMiner.
  • A deadline is a period of WPoStChallengeWindow epochs that divides a proving period.
  • Sectors are assigned to a deadline at ProveCommit, either a call to miner.ProveCommitSector or miner.ProveCommitAggregate, and will remain assigned to it throughout their lifetime.
  • In order to prove that they continuously store a sector, a miner must submit a miner.SubmitWindowedPoSt for each deadline.
  • Sectors are assigned to partitions. A partition is a set of sectors no larger than the number of sectors allowed by the Seal Proof, sp.WindowPoStPartitionSectors.
  • Sectors are assigned to a partition at ProveCommit, through a call to miner.ProveCommitSector or miner.ProveCommitAggregate, and they can be re-arranged via CompactPartitions.
  • Partitions are a by-product of our current proof mechanism. There is a limit on the number of sectors (sp.WindowPoStPartitionSectors) that can be proven in a single SNARK proof. If more sectors than this need to be proven, more than one SNARK proof is required, each SNARK proof representing one partition.

There are four relevant epochs associated with a deadline, shown in the table below:

Name Distance from Open Description
Open 0 Epoch from which a PoSt Proof for this deadline can be submitted.
Close WPoStChallengeWindow Epoch after which a PoSt Proof for this deadline will be rejected.
FaultCutoff -FaultDeclarationCutoff Epoch after which a miner.DeclareFault and miner.DeclareFaultRecovered for sectors in the upcoming deadline are rejected.
Challenge -WPoStChallengeLookback Epoch at which the randomness for the challenges is available.
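
The sketch below illustrates how the four epochs relate to a deadline’s Open epoch. The constant values are assumptions chosen only for this example (a 30-second epoch time); they are not normative protocol parameters.

    // Illustrative sketch: the four epochs associated with a deadline, expressed as
    // offsets from its Open epoch. The constants below are assumed example values.
    const WPOST_CHALLENGE_WINDOW: i64 = 60;    // assumed: 30 minutes of 30 s epochs
    const WPOST_CHALLENGE_LOOKBACK: i64 = 20;  // assumed
    const FAULT_DECLARATION_CUTOFF: i64 = 70;  // assumed

    struct DeadlineEpochs {
        open: i64,         // PoSt submissions for this deadline accepted from here
        close: i64,        // PoSt submissions rejected after this epoch
        fault_cutoff: i64, // fault (recovery) declarations rejected after this epoch
        challenge: i64,    // challenge randomness available at this epoch
    }

    fn deadline_epochs(open: i64) -> DeadlineEpochs {
        DeadlineEpochs {
            open,
            close: open + WPOST_CHALLENGE_WINDOW,
            fault_cutoff: open - FAULT_DECLARATION_CUTOFF,
            challenge: open - WPOST_CHALLENGE_LOOKBACK,
        }
    }

    fn main() {
        let d = deadline_epochs(1_000);
        println!("open={} close={} fault_cutoff={} challenge={}",
                 d.open, d.close, d.fault_cutoff, d.challenge);
    }
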
$$ \gdef\createporepbatch{\textsf{create\_porep\_batch}} \gdef\GrothProof{\textsf{Groth16Proof}} \gdef\Groth{\textsf{Groth16}} \gdef\GrothEvaluationKey{\textsf{Groth16EvaluationKey}} \gdef\GrothVerificationKey{\textsf{Groth16VerificationKey}} \gdef\creategrothproof{\textsf{create\_groth16\_proof}} \gdef\ParentLabels{\textsf{ParentLabels}} \gdef\or#1#2{\langle #1 | #2 \rangle} \gdef\porepreplicas{\textsf{porep\_replicas}} \gdef\postreplicas{\textsf{post\_replicas}} \gdef\winningpartitions{\textsf{winning\_partitions}} \gdef\windowpartitions{\textsf{window\_partitions}} \gdef\sector{\textsf{sector}} \gdef\lebitstolebytes{\textsf{le\_bits\_to\_le\_bytes}} \gdef\lebinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{le}}}}} \gdef\bebinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{be}}}}} \gdef\lebytesbinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{le-bytes}}}}} \gdef\feistelrounds{\textsf{feistel\_rounds}} \gdef\int{\textsf{int}} \gdef\lebytes{\textsf{le-bytes}} \gdef\lebytestolebits{\textsf{le\_bytes\_to\_le\_bits}} \gdef\letooctet{\textsf{le\_to\_octet}} \gdef\byte{\textsf{byte}} \gdef\postpartitions{\textsf{post\_partitions}} \gdef\PostReplica{\textsf{PostReplica}} \gdef\PostReplicas{\textsf{PostReplicas}} \gdef\PostPartitionProof{\textsf{PostPartitionProof}} \gdef\PostReplicaProof{\textsf{PostReplicaProof}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\pad{\textsf{pad}} \gdef\octettole{\textsf{octet\_to\_le}} \gdef\packed{\textsf{packed}} \gdef\val{\textsf{val}} \gdef\bits{\textsf{bits}} \gdef\partitions{\textsf{partitions}} \gdef\Batch{\textsf{Batch}} \gdef\batch{\textsf{batch}} \gdef\postbatch{\textsf{post\_batch}} \gdef\postchallenges{\textsf{post\_challenges}} \gdef\Nonce{\textsf{Nonce}} \gdef\createvanillaporepproof{\textsf{create\_vanilla\_porep\_proof}} \gdef\PorepVersion{\textsf{PorepVersion}} \gdef\bedecode{\textsf{be\_decode}} \gdef\OR{\mathbin{|}} \gdef\indexbits{\textsf{index\_bits}} \gdef\nor{\textsf{nor}} \gdef\and{\textsf{and}} \gdef\norgadget{\textsf{nor\_gadget}} \gdef\andgadget{\textsf{and\_gadget}} \gdef\el{\textsf{el}} \gdef\arr{\textsf{arr}} \gdef\pickgadget{\textsf{pick\_gadget}} \gdef\pick{\textsf{pick}} \gdef\int{\textsf{int}} \gdef\x{\textsf{x}} \gdef\y{\textsf{y}} \gdef\aap{{\langle \auxb | \pubb \rangle}} \gdef\aapc{{\langle \auxb | \pubb | \constb \rangle}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\parentlabelsbits{\textsf{parent\_labels\_bits}} \gdef\label{\textsf{label}} \gdef\layerbits{\textsf{layer\_bits}} \gdef\labelbits{\textsf{label\_bits}} \gdef\digestbits{\textsf{digest\_bits}} \gdef\node{\textsf{node}} \gdef\layerindex{\textsf{layer\_index}} \gdef\be{\textsf{be}} \gdef\octet{\textsf{octet}} \gdef\reverse{\textsf{reverse}} \gdef\LSBit{\textsf{LSBit}} \gdef\MSBit{\textsf{MSBit}} \gdef\LSByte{\textsf{LSByte}} \gdef\MSByte{\textsf{MSByte}} \gdef\PorepPartitionProof{\textsf{PorepPartitionProof}} \gdef\PostPartitionProof{\textsf{PostPartitionProof}} \gdef\lebinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{le}}}}} \gdef\bebinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{be}}}}} \gdef\octetbinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{octet}}}}} \gdef\fieldelement{\textsf{field\_element}} \gdef\Fqsafe{{\mathbb{F}_{q, \safe}}} \gdef\elem{\textsf{elem}} \gdef\challenge{\textsf{challenge}} \gdef\challengeindex{\textsf{challenge\_index}} \gdef\uniquechallengeindex{\textsf{unique\_challenge\_index}} \gdef\replicaindex{\textsf{replica\_index}} \gdef\uniquereplicaindex{\textsf{unique\_replica\_index}} 
\gdef\nreplicas{\textsf{n\_replicas}} \gdef\unique{\textsf{unique}} \gdef\R{\mathcal{R}} \gdef\getpostchallenge{\textsf{get\_post\_challenge}} \gdef\verifyvanillapostproof{\textsf{verify\_vanilla\_post\_proof}} \gdef\BinPathElement{\textsf{BinPathElement}} \gdef\BinTreeDepth{\textsf{BinTreeDepth}} \gdef\BinTree{\textsf{BinTree}} \gdef\BinTreeProof{\textsf{BinTreeProof}} \gdef\bintreeproofisvalid{\textsf{bintree\_proof\_is\_valid}} \gdef\Bit{{\{0, 1\}}} \gdef\Byte{\mathbb{B}} \gdef\calculatebintreechallenge{\textsf{calculate\_bintree\_challenge}} \gdef\calculateocttreechallenge{\textsf{calculate\_octtree\_challenge}} \gdef\depth{\textsf{depth}} \gdef\dot{\textsf{.}} \gdef\for{\textsf{for }} \gdef\Function{\textbf{Function: }} \gdef\Fq{{\mathbb{F}_q}} \gdef\leaf{\textsf{leaf}} \gdef\line#1#2#3{\scriptsize{\textsf{#1.}#2}\ \normalsize{#3}} \gdef\missing{\textsf{missing}} \gdef\NodeIndex{\textsf{NodeIndex}} \gdef\nodes{\textsf{nodes}} \gdef\OctPathElement{\textsf{OctPathElement}} \gdef\OctTree{\textsf{OctTree}} \gdef\OctTreeDepth{\textsf{OctTreeDepth}} \gdef\OctTreeProof{\textsf{OctTreeProof}} \gdef\octtreeproofisvalid{\textsf{octtree\_proof\_is\_valid}} \gdef\path{\textsf{path}} \gdef\pathelem{\textsf{path\_elem}} \gdef\return{\textsf{return }} \gdef\root{\textsf{root}} \gdef\Safe{{\Byte^{[32]}_\textsf{safe}}} \gdef\sibling{\textsf{sibling}} \gdef\siblings{\textsf{siblings}} \gdef\struct{\textsf{struct }} \gdef\Teq{\underset{{\small \mathbb{T}}}{=}} \gdef\Tequiv{\underset{{\small \mathbb{T}}}{\equiv}} \gdef\thin{{\thinspace}} \gdef\AND{\mathbin{\&}} \gdef\MOD{\mathbin{\%}} \gdef\createproof{{\textsf{create\_proof}}} \gdef\layer{\textsf{layer}} \gdef\nodeindex{\textsf{node\_index}} \gdef\childindex{\textsf{child\_index}} \gdef\push{\textsf{push}} \gdef\index{\textsf{index}} \gdef\leaves{\textsf{leaves}} \gdef\len{\textsf{len}} \gdef\ColumnProof{\textsf{ColumnProof}} \gdef\concat{\mathbin{\|}} \gdef\inputs{\textsf{inputs}} \gdef\Poseidon{\textsf{Poseidon}} \gdef\bi{\ \ } \gdef\Bool{{\{\textsf{True}, \textsf{False}\}}} \gdef\curr{\textsf{curr}} \gdef\if{\textsf{if }} \gdef\else{\textsf{else}} \gdef\proof{\textsf{proof}} \gdef\Sha#1{\textsf{Sha#1}} \gdef\ldotdot{{\ldotp\ldotp}} \gdef\as{\textsf{ as }} \gdef\bintreerootgadget{\textsf{bintree\_root\_gadget}} \gdef\octtreerootgadget{\textsf{octtree\_root\_gadget}} \gdef\cs{\textsf{cs}} \gdef\RCS{\textsf{R1CS}} \gdef\pathbits{\textsf{path\_bits}} \gdef\missingbit{\textsf{missing\_bit}} \gdef\missingbits{\textsf{missing\_bits}} \gdef\pubb{\textbf{pub}} \gdef\privb{\textbf{priv}} \gdef\auxb{\textbf{aux}} \gdef\constb{\textbf{const}} \gdef\CircuitVal{\textsf{CircuitVal}} \gdef\CircuitBit{{\textsf{CircuitVal}_\Bit}} \gdef\Le{\textsf{le}} \gdef\privateinput{\textsf{private\_input}} \gdef\publicinput{\textsf{public\_input}} \gdef\deq{\mathbin{\overset{\diamond}{=}}} \gdef\alloc{\textsf{alloc}} \gdef\insertgadget#1{\textsf{insert\_#1\_gadget}} \gdef\block{\textsf{block}} \gdef\shagadget#1#2{\textsf{sha#1\_#2\_gadget}} \gdef\poseidongadget#1{\textsf{poseidon\_#1\_gadget}} \gdef\refeq{\mathbin{\overset{{\small \&}}=}} \gdef\ptreq{\mathbin{\overset{{\small \&}}=}} \gdef\bit{\textsf{bit}} \gdef\extend{\textsf{extend}} \gdef\auxle{{[\textbf{aux}, \textsf{le}]}} \gdef\SpecificNotation{{\underline{\text{Specific Notation}}}} \gdef\repeat{\textsf{repeat}} \gdef\preimage{\textsf{preimage}} \gdef\digest{\textsf{digest}} \gdef\digestbytes{\textsf{digest\_bytes}} \gdef\digestint{\textsf{digest\_int}} \gdef\leencode{\textsf{le\_encode}} 
\gdef\ledecode{\textsf{le\_decode}} \gdef\ReplicaID{\textsf{ReplicaID}} \gdef\replicaid{\textsf{replica\_id}} \gdef\replicaidbits{\textsf{replica\_id\_bits}} \gdef\replicaidblock{\textsf{replica\_id\_block}} \gdef\cc{\textsf{::}} \gdef\new{\textsf{new}} \gdef\lebitsgadget{\textsf{le\_bits\_gadget}} \gdef\CircuitBitOrConst{{\textsf{CircuitValOrConst}_\Bit}} \gdef\createporepcircuit{\textsf{create\_porep\_circuit}} \gdef\CommD{\textsf{CommD}} \gdef\CommC{\textsf{CommC}} \gdef\CommR{\textsf{CommR}} \gdef\CommCR{\textsf{CommCR}} \gdef\commd{\textsf{comm\_d}} \gdef\commc{\textsf{comm\_c}} \gdef\commr{\textsf{comm\_r}} \gdef\commcr{\textsf{comm\_cr}} \gdef\assert{\textsf{assert}} \gdef\asserteq{\textsf{assert\_eq}} \gdef\TreeDProof{\textsf{TreeDProof}} \gdef\TreeRProof{\textsf{TreeRProof}} \gdef\TreeR{\textsf{TreeR}} \gdef\ParentColumnProofs{\textsf{ParentColumnProofs}} \gdef\challengebits{\textsf{challenge\_bits}} \gdef\packedchallenge{\textsf{packed\_challenge}} \gdef\PartitionProof{\textsf{PartitionProof}} \gdef\u#1{\textsf{u#1}} \gdef\packbitsasinputgadget{\textsf{pack\_bits\_as\_input\_gadget}} \gdef\treedleaf{\textsf{tree\_d\_leaf}} \gdef\treerleaf{\textsf{tree\_r\_leaf}} \gdef\calculatedtreedroot{\textsf{calculated\_tree\_d\_root}} \gdef\calculatedtreerleaf{\textsf{calculated\_tree\_r\_leaf}} \gdef\calculatedcommd{\textsf{calculated\_comm\_d}} \gdef\calculatedcommc{\textsf{calculated\_comm\_c}} \gdef\calculatedcommr{\textsf{calculated\_comm\_r}} \gdef\calculatedcommcr{\textsf{calculated\_comm\_cr}} \gdef\layers{\textsf{layers}} \gdef\total{\textsf{total}} \gdef\column{\textsf{column}} \gdef\parentcolumns{\textsf{parent\_columns}} \gdef\columns{\textsf{columns}} \gdef\parentlabel{\textsf{parent\_label}} \gdef\label{\textsf{label}} \gdef\calculatedtreecleaf{\textsf{calculated\_tree\_c\_leaf}} \gdef\calculatedcolumn{\textsf{calculated\_column}} \gdef\parentlabels{\textsf{parent\_labels}} \gdef\drg{\textsf{drg}} \gdef\exp{\textsf{exp}} \gdef\parentlabelbits{\textsf{parent\_label\_bits}} \gdef\parentlabelblock{\textsf{parent\_label\_block}} \gdef\Bits{\textsf{ Bits}} \gdef\safe{\textsf{safe}} \gdef\calculatedlabel{\textsf{calculated\_label}} \gdef\createlabelgadget{\textsf{create\_label\_gadget}} \gdef\encodingkey{\textsf{encoding\_key}} \gdef\encodegadget{\textsf{encode\_gadget}} \gdef\TreeC{\textsf{TreeC}} \gdef\value{\textsf{value}} \gdef\encoded{\textsf{encoded}} \gdef\unencoded{\textsf{unencoded}} \gdef\key{\textsf{key}} \gdef\lc{\textsf{lc}} \gdef\LC{\textsf{LC}} \gdef\LinearCombination{\textsf{LinearCombination}} \gdef\one{\textsf{one}} \gdef\constraint{\textsf{constraint}} \gdef\proofs{\textsf{proofs}} \gdef\merkleproofs{\textsf{merkle\_proofs}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\challenges{\textsf{challenges}} \gdef\pub{\textsf{pub}} \gdef\priv{\textsf{priv}} \gdef\last{\textsf{last}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\post{\textsf{post}} \gdef\SectorID{\textsf{SectorID}} \gdef\winning{\textsf{winning}} \gdef\window{\textsf{window}} \gdef\Replicas{\textsf{Replicas}} \gdef\P{\mathcal{P}} \gdef\V{\mathcal{V}} \gdef\ww{{\textsf{winning}|\textsf{window}}} \gdef\replicasperk{{\textsf{replicas}/k}} \gdef\replicas{\textsf{replicas}} \gdef\Replica{\textsf{Replica}} \gdef\createvanillapostproof{\textsf{create\_vanilla\_post\_proof}} \gdef\createpostcircuit{\textsf{create\_post\_circuit}} \gdef\ReplicaProof{\textsf{ReplicaProof}} \gdef\aww{{\langle \ww \rangle}} \gdef\partitionproof{\textsf{partition\_proof}} \gdef\replicas{\textsf{replicas}} 
\gdef\getdrgparents{\textsf{get\_drg\_parents}} \gdef\getexpparents{\textsf{get\_exp\_parents}} \gdef\DrgSeed{\textsf{DrgSeed}} \gdef\DrgSeedPrefix{\textsf{DrgSeedPrefix}} \gdef\FeistelKeysBytes{\textsf{FeistelKeysBytes}} \gdef\porep{\textsf{porep}} \gdef\rng{\textsf{rng}} \gdef\ChaCha#1{\textsf{ChaCha#1}} \gdef\cc{\textsf{::}} \gdef\fromseed{\textsf{from\_seed}} \gdef\buckets{\textsf{buckets}} \gdef\meta{\textsf{meta}} \gdef\dist{\textsf{dist}} \gdef\each{\textsf{each}} \gdef\PorepID{\textsf{PorepID}} \gdef\porepgraphseed{\textsf{porep\_graph\_seed}} \gdef\utf{\textsf{utf8}} \gdef\DrgStringID{\textsf{DrgStringID}} \gdef\FeistelStringID{\textsf{FeistelStringID}} \gdef\graphid{\textsf{graph\_id}} \gdef\createfeistelkeys{\textsf{create\_feistel\_keys}} \gdef\FeistelKeys{\textsf{FeistelKeys}} \gdef\feistelrounds{\textsf{feistel\_rounds}} \gdef\feistel{\textsf{feistel}} \gdef\ExpEdgeIndex{\textsf{ExpEdgeIndex}} \gdef\loop{\textsf{loop}} \gdef\right{\textsf{right}} \gdef\left{\textsf{left}} \gdef\mask{\textsf{mask}} \gdef\RightMask{\textsf{RightMask}} \gdef\LeftMask{\textsf{LeftMask}} \gdef\roundkey{\textsf{round\_key}} \gdef\beencode{\textsf{be\_encode}} \gdef\Blake{\textsf{Blake2b}} \gdef\input{\textsf{input}} \gdef\output{\textsf{output}} \gdef\while{\textsf{while }} \gdef\digestright{\textsf{digest\_right}} \gdef\xor{\mathbin{\oplus_\text{xor}}} \gdef\Edges{\textsf{ Edges}} \gdef\edge{\textsf{edge}} \gdef\expedge{\textsf{exp\_edge}} \gdef\expedges{\textsf{exp\_edges}} \gdef\createlabel{\textsf{create\_label}} \gdef\Label{\textsf{Label}} \gdef\Column{\textsf{Column}} \gdef\Columns{\textsf{Columns}} \gdef\ParentColumns{\textsf{ParentColumns}} % `\tern` should be written as % \gdef\tern#1?#2:#3{#1\ \text{?}\ #2 \ \text{:}\ #3} % but that's not possible due to https://github.com/KaTeX/KaTeX/issues/2288 \gdef\tern#1#2#3{#1\ \text{?}\ #2 \ \text{:}\ #3} \gdef\repeattolength{\textsf{repeat\_to\_length}} \gdef\verifyvanillaporepproof{\textsf{verify\_vanilla\_porep\_proof}} \gdef\poreppartitions{\textsf{porep\_partitions}} \gdef\challengeindex{\textsf{challenge\_index}} \gdef\porepbatch{\textsf{porep\_batch}} \gdef\winningchallenges{\textsf{winning\_challenges}} \gdef\windowchallenges{\textsf{window\_challenges}} \gdef\PorepPartitionProof{\textsf{PorepPartitionProof}} \gdef\TreeD{\textsf{TreeD}} \gdef\TreeCProof{\textsf{TreeCProof}} \gdef\Labels{\textsf{Labels}} \gdef\porepchallenges{\textsf{porep\_challenges}} \gdef\postchallenges{\textsf{post\_challenges}} \gdef\PorepChallengeSeed{\textsf{PorepChallengeSeed}} \gdef\getporepchallenges{\textsf{get\_porep\_challenges}} \gdef\getallparents{\textsf{get\_all\_parents}} \gdef\PorepChallengeProof{\textsf{PorepChallengeProof}} \gdef\challengeproof{\textsf{challenge\_proof}} \gdef\PorepChallenges{\textsf{PorepChallenges}} \gdef\replicate{\textsf{replicate}} \gdef\createreplicaid{\textsf{create\_replica\_id}} \gdef\ProverID{\textsf{ProverID}} \gdef\replicaid{\textsf{replica\_id}} \gdef\generatelabels{\textsf{generate\_labels}} \gdef\labelwithdrgparents{\textsf{label\_with\_drg\_parents}} \gdef\labelwithallparents{\textsf{label\_with\_all\_parents}} \gdef\createtreecfromlabels{\textsf{create\_tree\_c\_from\_labels}} \gdef\ColumnDigest{\textsf{ColumnDigest}} \gdef\encode{\textsf{encode}} $$

Stacked DRG Proof of Replication

Merkle Proofs

Implementation:

Additional Notation:

$\index_l: [\lfloor N_\nodes / 2^l \rfloor] \equiv [\len(\BinTree\dot\layer_l)]$
The index of a node in a $\BinTree$ layer $l$. The leftmost node in a tree has $\index_l = 0$. For each tree layer $l$ (excluding the root layer) a Merkle proof verifier calculates the label of the node at $\index_l$ from a single Merkle proof path element $\BinTreeProof_c\dot\path[l - 1] \thin$.

BinTreeProofs

The method $\BinTree\dot\createproof$ is used to create a Merkle proof for a challenge node $c$.

$\overline{\underline{\Function \BinTree\dot\createproof(c: \NodeIndex) \rightarrow \BinTreeProof_c}}$
$\line{1}{\bi}{\leaf: \Safe = \BinTree\dot\leaves[c]}$
$\line{2}{\bi}{\root: \Safe = \BinTree\dot\root}$

$\line{3}{\bi}{\path: \BinPathElement^{[\BinTreeDepth]}= [\ ]}$
$\line{4}{\bi}{\for l \in [\BinTreeDepth]:}$
$\line{5}{\bi}{\quad \index_l: [\len(\BinTree\dot\layer_l)] = c \gg l}$
$\line{6}{\bi}{\quad \missing: \Bit = \index_l \AND 1}$
$\line{7}{\bi}{\quad \sibling: \Safe = \if \missing = 0:}$
$\quad\quad\quad \BinTree\dot\layer_l[\index_l + 1]$
$\quad\quad\thin \else:$
$\quad\quad\quad \BinTree\dot\layer_l[\index_l - 1]$
$\line{8}{\bi}{\quad \path\dot\push(\BinPathElement \thin \{\ \sibling, \thin \missing\ \} \thin )}$

$\line{9}{\bi}{\return \BinTreeProof_c \thin \{\ \leaf, \thin \root, \thin \path\ \}}$

Code Comments:

  • Line 5: Calculates the node index in layer $l$ of the node that the verifier calculated using the previous path element (or the $\BinTreeProof_c\dot\leaf$ if $l = 0$). Note that $c \gg l \equiv \lfloor c / 2^l \rfloor \thin$.
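
The same procedure written as a library-style sketch rather than pseudocode; the in-memory tree layout (a vector of layers, with layers[0] holding the leaves and the last layer holding only the root) is an assumption made for the example.

    // Sketch of BinTree::create_proof: walk from the leaf layer towards the root,
    // recording at each layer the sibling node and the "missing" bit that says
    // whether the running node is the left (0) or right (1) hash input.
    struct BinPathElement { sibling: [u8; 32], missing: u8 }
    struct BinTreeProof { leaf: [u8; 32], root: [u8; 32], path: Vec<BinPathElement> }

    struct BinTree { layers: Vec<Vec<[u8; 32]>> } // layers[0] = leaves, last = [root]

    impl BinTree {
        fn create_proof(&self, c: usize) -> BinTreeProof {
            let depth = self.layers.len() - 1;
            let mut path = Vec::with_capacity(depth);
            for l in 0..depth {
                let index_l = c >> l; // node index in layer l, i.e. floor(c / 2^l)
                let missing = (index_l & 1) as u8;
                let sibling = if missing == 0 {
                    self.layers[l][index_l + 1]
                } else {
                    self.layers[l][index_l - 1]
                };
                path.push(BinPathElement { sibling, missing });
            }
            BinTreeProof { leaf: self.layers[0][c], root: self.layers[depth][0], path }
        }
    }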

OctTreeProofs

The method $\OctTree\dot\createproof$ is used to create a Merkle proof for a challenge node $c$.

Additional Notation:

$\index_l: [\lfloor N_\nodes / 8^l \rfloor] \equiv [\len(\OctTree\dot\layer_l)]$
The index of a node in an $\OctTree$ layer $l$. The leftmost node in a tree has $\index_l = 0$. For each tree layer $l$ (excluding the root layer) a Merkle proof verifier calculates the label of the node at $\index_l$ from a single Merkle proof path element $\OctTreeProof_c\dot\path[l - 1] \thin$.

$\textsf{first\_sibling}_l \thin, \textsf{last\_sibling}_l: [\lfloor N_\nodes / 8^l \rfloor]$
The node indexes in tree layer $l$ of the first and last nodes in this layer’s Merkle path element’s siblings array $\OctTreeProof_c\dot\path[l]\dot\siblings \thin$.

$\overline{\underline{\Function \OctTree\dot\createproof(c: \NodeIndex) \rightarrow \OctTreeProof_c}}$
$\line{1}{\bi}{\leaf: \Fq = \OctTree\dot\leaves[c]}$
$\line{2}{\bi}{\root: \Fq = \OctTree\dot\root}$

$\line{3}{\bi}{\path: \OctPathElement^{[\OctTreeDepth]}= [\ ]}$
$\line{4}{\bi}{\for l \in [\OctTreeDepth]:}$
$\line{5}{\bi}{\quad \index_l: [\len(\OctTree\dot\layer_l)] = c \gg (3 * l)}$
$\line{6}{\bi}{\quad \missing: [8] = \index_l \MOD 8}$

$\line{7}{\bi}{\quad \textsf{first\_sibling}_l = \index_l - \missing}$
$\line{8}{\bi}{\quad \textsf{last\_sibling}_l = \index_l + (7 - \missing)}$
$\line{9}{\bi}{\quad \siblings: \Fq^{[7]} =}$
$\quad\quad\quad \OctTree\dot\layer_l[\textsf{first\_sibling}_l \mathbin{\ldotdot} \index_l]$
$\quad\quad\quad \|\ \OctTree\dot\layer_l[\index_l + 1 \mathbin{\ldotdot} \textsf{last\_sibling}_l + 1]$

$\line{10}{}{\quad \path\dot\push(\OctPathElement \thin \{\ \siblings, \thin \missing\ \} \thin )}$
$\line{11}{}{\return \OctTreeProof_c \thin \{\ \leaf, \thin \root, \thin \path\ \}}$

Code Comments:

  • Line 5: Calculates the node index in layer $l$ of the node that the verifier calculated themselves using the previous path element (or $\OctTreeProof_c\dot\leaf$ if $l = 0$). Note that $c \gg (3 * l) \equiv \lfloor c / 8^l \rfloor \thin$.
  • Line 7-8: Calculates the indexes in tree layer $l$ of the first and last (both inclusive) Merkle hash inputs for layer $l$’s path element.
  • Line 9: Copies the 7 Merkle hash inputs that will be in layer $l$’s path element $\OctTreeProof\dot\path[l]\dot\siblings \thin$.

Proof Root Validation

The functions $\bintreeproofisvalid$ and $\octtreeproofisvalid$ are used to verify that a $\BinTreeProof\dot\path$ or an $\OctTreeProof\dot\path$ hash to the root found in the Merkle proof $\BinTreeProof\dot\root$ and $\OctTreeProof\dot\root$ respectively.

Note that these functions do not verify that a $\BinTreeProof\dot\path$ or an $\OctTreeProof\dot\path$ correspond to the expected Merkle challenge $c$. To verify that a proof path is consistent with $c$, see the pseudocode functions $\calculatebintreechallenge$ and $\calculateocttreechallenge$.

Implementation:

$\overline{\underline{\Function \bintreeproofisvalid(\proof: \BinTreeProof) \rightarrow \Bool}\thin}$
$\line{1}{\bi}{\curr: \Safe = \proof\dot\leaf}$
$\line{2}{\bi}{\for \sibling, \missing \in \proof\dot\path:}$
$\line{3}{\bi}{\quad \curr: \Safe = \if \missing = 0:}$
$\quad\quad\quad\quad \Sha{254}_2([\curr, \sibling])$
$\quad\quad\thin \else:$
$\quad\quad\quad\quad \Sha{254}_2([\sibling, \curr])$
$\line{4}{\bi}{\return \curr = \proof\dot\root}$

The function $\octtreeproofisvalid$ can receive as the type of its $\proof$ argument either an $\OctTreeProof$ or $\ColumnProof$ (a $\ColumnProof$ is just an $\OctTreeProof$ with an adjoined field $\column$, $\ColumnProof_c \equiv \OctTreeProof_c \cup \column_c \thin$).

$\overline{\underline{\Function \octtreeproofisvalid(\proof: \OctTreeProof) \rightarrow \Bool}\thin}$
$\line{1}{\bi}{\curr: \Fq = \proof\dot\leaf}$
$\line{2}{\bi}{\for \siblings, \missing \in \proof\dot\path:}$
$\line{3}{\bi}{\quad \inputs: \Fq^{[8]} = \siblings[\ldotdot \missing] \concat \curr \concat \siblings[\missing \ldotdot]}$
$\line{4}{\bi}{\quad \curr = \Poseidon_8(\inputs)}$
$\line{5}{\bi}{\return \curr = \proof\dot\root}$
$\overline{\underline{\Function \octtreeproofisvalid(\proof: \ColumnProof) \rightarrow \Bool}\thin}$
$\line{1}{\bi}{\return \octtreeproofisvalid(\OctTreeProof\ \{\ \leaf, \root, \path \Leftarrow \ColumnProof\ \})}$
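
The binary-tree case can also be sketched in code; the two-input hash $\Sha{254}_2$ is passed in as a closure so the sketch stays independent of any particular hashing crate.

    // Sketch of bintree_proof_is_valid: fold the path from the leaf to the root,
    // letting the missing bit decide whether the running node is hashed on the
    // left or on the right.
    struct BinPathElem { sibling: [u8; 32], missing: u8 }
    struct BinProof { leaf: [u8; 32], root: [u8; 32], path: Vec<BinPathElem> }

    fn bintree_proof_is_valid<H>(proof: &BinProof, hash2: H) -> bool
    where
        H: Fn(&[u8; 32], &[u8; 32]) -> [u8; 32],
    {
        let mut curr = proof.leaf;
        for elem in &proof.path {
            curr = if elem.missing == 0 {
                hash2(&curr, &elem.sibling)
            } else {
                hash2(&elem.sibling, &curr)
            };
        }
        curr == proof.root
    }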

Merkle Proof Challenge Validation

Given a Merkle path $\path$ in a $\BinTree$ or $\OctTree$, $\calculatebintreechallenge$ and $\calculateocttreechallenge$ calculate the Merkle challenge $c$ for which the Merkle proof $\path$ was generated.

Given a Merkle challenge $c$ and its path in a $\BinTree$ or $\OctTree$, the concatenation of the $\missing$ bits (or octal digits) in the Merkle path is the little-endian binary (or octal) representation of the integer $c \thin$:

$\line{1}{\bi}{c: \NodeIndex = \langle \text{challenge} \rangle}$
$\line{2}{\bi}{\BinTreeProof_c = \BinTree\dot\createproof(c)}$
$\line{3}{\bi}{\OctTreeProof_c = \OctTree\dot\createproof(c)}$

$\line{4}{\bi}{\llcorner c \lrcorner_{2, \Le}: \Bit^{[\hspace{1pt} \log_2(N_\nodes) \hspace{1pt}]} = \big\|_{\pathelem \hspace{1pt} \in \hspace{1pt} \BinTreeProof_c\dot\path} \thin \pathelem\dot\missing}$

$\line{5}{\bi}{\mathrlap{\llcorner c \lrcorner_{8, \Le}: [8]^{[\hspace{1pt} \log_8(N_\nodes) \hspace{1pt}]}}\hphantom{\llcorner c \lrcorner_{2, \Le}: \Bit^{[\hspace{1pt} \log_2(N_\nodes) \hspace{1pt}]}} = \big\|_{\pathelem \hspace{1pt} \in \hspace{1pt} \BinTreeProof_c\dot\path} \thin \pathelem\dot\missing}$

$\line{6}{\bi}{\mathrlap{\llcorner c \lrcorner_{2, \Le}: \Bit^{[\hspace{1pt} \log_2(N_\nodes) \hspace{1pt}]}}\hphantom{\llcorner c \lrcorner_{2, \Le}: \Bit^{[\hspace{1pt} \log_2(N_\nodes) \hspace{1pt}]}} = \big\|_{\pathelem \hspace{1pt} \in \hspace{1pt} \OctTreeProof_c\dot\path} \thin \llcorner \pathelem\dot\missing \lrcorner_{2, \Le}}$

Implementation: storage_proofs::merkle::MerkleProofTrait::path_index()

Additional Notation:

$\path = \BinTreeProof\dot\path$
$\path = \OctTreeProof\dot\path$
The $\path$ argument is the path field of a $\BinTreeProof$ or $\OctTreeProof$.

$c: \NodeIndex$
The challenge corresponding to $\path$.

$l \in [\BinTreeDepth]$
$l \in [\OctTreeDepth]$
A path element’s layer in a Merkle tree (the layer in the tree that contains the path elements $\siblings$). Layer $l = 0$ is the leaves layer of the tree. Here, values for $l$ do not include the root layer $l \neq \BinTreeDepth, \OctTreeDepth \thin$.

$\overline{\underline{\Function \calculatebintreechallenge(\path: \BinPathElement^{[\BinTreeDepth]}) \rightarrow c}}$
$\line{1}{\bi}{\return \sum_{l \in [\BinTreeDepth]}{\path[l]\dot\missing * 2^l}}$
$\overline{\underline{\Function \calculateocttreechallenge(\path: \OctPathElement^{[\OctTreeDepth]}) \rightarrow c}}$
$\line{1}{\bi}{\return \sum_{l \in [\OctTreeDepth]}{\path[l]\dot\missing * 8^l}}$
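
Equivalently, in code: the challenge index is simply the little-endian base-2 (respectively base-8) integer spelled out by the $\missing$ values along the path.

    // Sketch: recover the challenge index from the `missing` values of a proof path.
    // A BinTree path element contributes one bit; an OctTree element, one octal digit.
    fn calculate_bintree_challenge(missing_bits: &[u64]) -> u64 {
        missing_bits
            .iter()
            .enumerate()
            .map(|(l, bit)| bit * (1u64 << l))
            .sum()
    }

    fn calculate_octtree_challenge(missing_digits: &[u64]) -> u64 {
        missing_digits
            .iter()
            .enumerate()
            .map(|(l, digit)| digit * 8u64.pow(l as u32))
            .sum()
    }

    fn main() {
        // 11 = 0b1011, so its little-endian missing bits are [1, 1, 0, 1].
        assert_eq!(calculate_bintree_challenge(&[1, 1, 0, 1]), 11);
        // 100 = 0o144, so its little-endian missing digits are [4, 4, 1].
        assert_eq!(calculate_octtree_challenge(&[4, 4, 1]), 100);
    }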

Stacked Depth Robust Graphs

Filecoin utilizes the topological properties of depth robust graphs (DRGs) to build a sequential and regeneration-resistant encoding scheme. We stack $N_\layers$ DRGs, each containing $N_\nodes$ nodes, on top of one another and connect each adjacent pair of DRG layers via the edges of a bipartite expander graph. The source layer of each expander is the DRG at layer $l$ and the sink layer is the DRG at layer $l + 1$. The resulting graph is termed a Stacked-DRG.

For every node $v \in [N_\nodes]$ in the DRG at layer $l \in [N_\layers]$, we generate $d_\drg$ DRG parents for $v$ in layer $l$. DRG parents are generated using the Bucket Sampling algorithm. For every node $v$ in layers $l > 0$ we generate $d_\exp$ expander parents for $v$, where the parents are in layer $l - 1$. Expander parents are generated using a pseudorandom permutation (PRP) $\pi: [N_\nodes] \rightarrow [N_\nodes]$ which maps a node in the DRG layer $l$ to a node in the DRG layer $l - 1$. The pseudorandom permutation is generated using an $N_\feistelrounds$-round Feistel network whose round function is the keyed hash function $\Blake$, where round keys are specified by the constant $\FeistelKeys$.

DRG

The function $\getdrgparents$ is used to generate a node $v$’s DRG parents in the Stacked-DRG layer $l_v \thin$. The set of DRG parents returned for $v$ is the same for all PoReps of the same PoRep version $\PorepID$.

Implementation: storage_proofs::core::drgraph::BucketGraph::parents()

Additional Notation:

$v, u: \NodeIndex$
DRG child and parent node indexes respectively. A DRG child and its parents are in the same Stacked-DRG layer $l$.

$\mathbf{u}_\drg: \NodeIndex^{[d_\drg]}$
The set of $v$’s DRG parents.

$v_\meta, u_\meta: [d_\meta * N_\nodes]$
The indexes of $v$ and $u$ in the metagraph.

$\rng_{\PorepID, v}$
The RNG used to sample $v$’s DRG parents. The RNG is seeded with the same bytes $\DrgSeed_{\PorepID, v}$ every time $\getdrgparents$ is called for node $v$ from a PoRep having version $\PorepID$.

$x \xleftarrow[\rng]{} S$
Samples $x$ uniformly from $S$ using the seeded $\rng$.

$b: [1, N_\buckets + 1]$
The Bucket Sampling bucket index. Bucket indexes start at 1.

$\dist_{\min, b}$
$\dist_{\max, b}$
The minimum and maximum parent distances in bucket $b$.

$\dist_{u_\meta}$
The distance $u_\meta$ is from $v_\meta$ in the metagraph.

$\overline{\underline{\Function \getdrgparents(v: \NodeIndex) \rightarrow \NodeIndex^{[d_\drg]}}}$
$\line{1}{\bi}{\if v \in \{0, 1\}:}$
$\line{2}{\bi}{\quad \return 0^{[d_\drg]}}$

$\line{3}{\bi}{\DrgSeed_{\PorepID, v}: \Byte^{[32]} = \DrgSeed_\PorepID \concat \leencode(v) \as \Byte^{[4]}}$
$\line{4}{\bi}{\rng_{\PorepID, v} = \ChaCha{8}\cc\fromseed(\DrgSeed_{\PorepID, v})}$

$\line{5}{\bi}{\mathbf{u}_\drg: \textsf{NodeIndex}^{[d_\drg]} = [\ ]}$

$\line{6}{\bi}{v_\meta = v * d_\textsf{meta}}$
$\line{7}{\bi}{N_\buckets = \lceil \log_2(v_\meta) \rceil}$
$\line{8}{\bi}{\for \each \in [d_\meta]:}$
$\line{9}{\bi}{\quad b \xleftarrow[\rng]{}  [1, N_\buckets + 1]}$
$\line{10}{}{\quad \dist_{\max, b} = \textsf{min}(v_\meta, 2^b)}$
$\line{11}{}{\quad \dist_{\min, b} = \textsf{max}(\dist_{\max, b} / 2, 2)}$
$\line{12}{}{\quad \dist_{u_\meta} \xleftarrow[\rng]{} [\dist_{\min, b} \thin, \dist_{\max, b}]}$
$\line{13}{}{\quad u_\meta = v_\meta - \dist_{u_\meta}}$
$\line{14}{}{\quad u: \NodeIndex = \lfloor u_\meta / d_\meta \rfloor}$
$\line{15}{}{\quad \mathbf{u}_\drg\dot\push(u)}$

$\line{16}{}{\mathbf{u}_\drg\dot\push(v - 1)}$
$\line{17}{}{\return \mathbf{u}_\drg}$
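
The control flow of Bucket Sampling is summarised by the sketch below. The uniform sampler is passed in as a closure standing in for the per-node seeded $\ChaCha{8}$ RNG, and the degree constants are assumed values chosen only for illustration.

    // Sketch of get_drg_parents. `sample(lo, hi)` must return a uniform integer in
    // the inclusive range [lo, hi]; in the implementation this role is played by a
    // ChaCha8 RNG seeded per node. The degree constants are assumptions.
    const D_DRG: usize = 6; // total DRG parents per node (assumed)
    const D_META: u64 = 5;  // metagraph expansion factor (assumed)

    fn get_drg_parents(v: u64, mut sample: impl FnMut(u64, u64) -> u64) -> Vec<u64> {
        // Nodes 0 and 1 have no earlier nodes to point at: all-zero parents.
        if v < 2 {
            return vec![0; D_DRG];
        }
        let mut parents = Vec::with_capacity(D_DRG);
        let v_meta = v * D_META;
        let n_buckets = 64 - (v_meta - 1).leading_zeros() as u64; // ceil(log2(v_meta))
        for _ in 0..D_META {
            let b = sample(1, n_buckets);          // pick a bucket
            let dist_max = v_meta.min(1u64 << b);  // furthest parent distance in bucket b
            let dist_min = (dist_max / 2).max(2);  // nearest parent distance in bucket b
            let dist = sample(dist_min, dist_max); // sample a distance within the bucket
            let u_meta = v_meta - dist;
            parents.push(u_meta / D_META);         // map back from the metagraph
        }
        parents.push(v - 1);                       // the immediate predecessor is always a parent
        parents
    }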

Expander

The function $\getexpparents$ is used to generate a node $v$’s expander parents in the Stacked-DRG layer $l_v - 1 \thin$. The set of expander parents returned for a node $v$ is the same for all PoReps of the same version $\PorepID$.

Implementation: storage_proofs::porep::stacked::vanilla::graph::StackedGraph::generate_expanded_parents()

Additional Notation:

$v, u: \NodeIndex$
Expander child and parent node indexes respectively. Each expander parent $u$ is in the Stacked-DRG layer $l - 1$ that precedes the child node $v$’s layer $l$.

$\mathbf{u}_\exp: \NodeIndex^{[d_\exp]}$
The set of $v$’s expander parents.

$e_l \thin, e_{l - 1}: \ExpEdgeIndex$
The index of an expander edge in the child $v$’s layer $l$ and the parent $u$’s layer $l - 1$ respectively. An expander edge connects edge indexes $(e_{l - 1}, e_l)$ in adjacent Stacked-DRG layers.

$\overline{\underline{\Function \getexpparents(v: \NodeIndex) \rightarrow \NodeIndex^{[d_\exp]}}}$
$\line{1}{\bi}{\mathbf{u}_\exp: \NodeIndex^{[d_\exp]} = [\ ]}$
$\line{2}{\bi}{\for p\in [d_\exp]:}$
$\line{3}{\bi}{\quad e_l = v * d_\exp + p}$
$\line{4}{\bi}{\quad e_{l - 1} = \feistel(e_l)}$
$\line{5}{\bi}{\quad u: \NodeIndex = \lfloor e_{l - 1} / d_\exp \rfloor}$
$\line{6}{\bi}{\quad \mathbf{u}_\exp\dot\push(u)}$
$\line{7}{\bi}{\return \mathbf{u}_\exp}$
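
A direct transcription into code, with the Feistel permutation passed in as a closure (its structure is given in the next section) and an assumed value for $d_\exp$.

    // Sketch of get_exp_parents: map each of the node's d_exp edge slots through
    // the Feistel permutation and reduce the permuted edge index back to a node
    // index in the previous layer. d_exp = 8 is an assumed value.
    const D_EXP: u64 = 8;

    fn get_exp_parents(v: u64, feistel: impl Fn(u64) -> u64) -> Vec<u64> {
        (0..D_EXP)
            .map(|p| {
                let e_l = v * D_EXP + p;   // edge index in the child's layer l
                let e_prev = feistel(e_l); // permuted edge index in layer l - 1
                e_prev / D_EXP             // parent node index in layer l - 1
            })
            .collect()
    }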

Feistel Network PRP

The function $\feistel$ runs an $N_\feistelrounds$-round Feistel network as a pseudorandom permutation (PRP) over the set of expander edges $[d_\exp * N_\nodes] = [2^{33}]$ in a Stacked-DRG layer.

Implementation: storage_proofs::core::crypto::feistel::permute()

Additional Notation:

$\input, \output$
The Feistel network’s input and output blocks respectively. The $\input$ argument and the returned $\output$ are guaranteed to be valid $\u{33}$ expander edge indexes; however, their intermediate values may be $\u{34}$.

$\u{64}_{(17)}$
An unsigned 64-bit integer of which only the 17 least significant bits are utilized; the 47 most significant bits are always $0$.

$\left_r, \right_r$
The left and right halves of round $r$’s input block.

$\left_{r + 1}, \right_{r + 1}$
The left and right halves of the next round’s $r + 1$ input block.

$\FeistelKeys_\PorepID$
Is the set of constant round keys associated with the PoRep version $\PorepID$ that called $\feistel$.

$\key_r$
Round $r$’s key.

$\digestright$
The $\ell_\mask^\bit = 17$ least significant bits of a round’s Blake2b $\digest$.

$\overline{\underline{\Function \feistel(\input: \ExpEdgeIndex) \rightarrow \ExpEdgeIndex\ }\thin}$
$\line{1}{\bi}{\textsf{loop}:}$
$\line{2}{\bi}{\quad \right_r: \u{64}_{(17)} = \input \AND \mathsf{RightMask}}$
$\line{3}{\bi}{\quad \left_r: \u{64}_{(17)} = (\input \AND \LeftMask) \gg \ell_\mask^\bit}$

$\line{4}{\bi}{\quad \for \key_r \in \FeistelKeys_\PorepID:}$
$\line{5}{\bi}{\quad\quad \preimage: \Byte^{[16]} = \beencode(\right_r) \as \Byte^{[8]} \concat \beencode(\key_r) \as \Byte^{[8]}}$
$\line{6}{\bi}{\quad\quad \digest: \Byte^{[8]} = \Blake(\preimage)[..8]}$
$\line{7}{\bi}{\quad\quad \digest: \u{64} = \bedecode(\digest)}$
$\line{8}{\bi}{\quad\quad \digestright: \u{64}_{(17)} = \digest \AND \RightMask}$

$\line{9}{\bi}{\quad\quad \left_{r + 1}: \u{64}_{(17)} = \right_r}$
$\line{10}{}{\quad\quad \right_{r + 1}: \u{64}_{(17)} = \left_r \xor \digestright}$

$\line{11}{}{\quad\quad \left_r \thin, \right_r = \left_{r + 1} \thin, \right_{r + 1}}$

$\line{12}{}{\quad \output: \u{64}_{(34)} = (\left_r \ll \ell_\mask^\bit) \OR \right_r}$

$\line{13}{}{\quad \if \output \in [N_\expedges]:}$
$\line{14}{}{\quad\quad \return \output}$

$\line{15}{}{\quad \input: \u{64}_{(34)} = \output}$

Code Comments:

  • Line 1: Loops forever until the $\textsf{return}$ statement is reached (loops until $\output$ is a valid $\ExpEdgeIndex$).
  • Lines 13-14: Checks if $\output$ is a valid $\ExpEdgeIndex$ (true iff the most significant bit, the 34th bit, is 0); otherwise the Feistel network is rerun.
  • Line 15: Signifies that the next Feistel network’s input has its most significant bit, the 34th bit, set to 1 (as opposed to the argument $\input: \ExpEdgeIndex \equiv \u{64}_{(33)}$, which does not have its 34th bit set).
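
The cycle-walking structure of the permutation, independent of the concrete $\Blake$ round function, can be sketched as follows; the round function is passed in as a closure returning the low bits of the round digest.

    // Sketch of the Feistel PRP with cycle walking. `round_fn(right, key)` stands
    // in for the Blake2b-based round function and must return (at least) the low
    // HALF_BITS bits of the round digest.
    const HALF_BITS: u32 = 17;                    // bits per Feistel half
    const RIGHT_MASK: u64 = (1 << HALF_BITS) - 1; // low-half mask
    const N_EXP_EDGES: u64 = 1 << 33;             // valid expander edge indexes

    fn feistel(input: u64, round_keys: &[u64], round_fn: impl Fn(u64, u64) -> u64) -> u64 {
        let mut x = input;
        loop {
            let mut right = x & RIGHT_MASK;
            let mut left = (x >> HALF_BITS) & RIGHT_MASK;
            for &key in round_keys {
                let digest_right = round_fn(right, key) & RIGHT_MASK;
                let next_right = left ^ digest_right;
                left = right;
                right = next_right;
            }
            let output = (left << HALF_BITS) | right; // 34-bit candidate output
            // Cycle walking: outputs outside the edge-index domain are fed back in.
            if output < N_EXP_EDGES {
                return output;
            }
            x = output;
        }
    }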

All Parents

The function $\getallparents$ returns a node $v$’s set of DRG parents concatenated with its set of expander parents. The set of parents returned for $v$ is the same across all PoReps of the same PoRep version $\PorepID$.

$\overline{\underline{\Function \getallparents(v: \NodeIndex) \rightarrow \mathbf{u}_\total}}$
$\line{1}{\bi}{\return \getdrgparents(v) \concat \getexpparents(v)}$

Labeling

Labeling a Node

The labeling function for every node in a Stacked-DRG is $\Sha{254}$ producing a 254-bit field element $\Fqsafe$. A unique preimage is derived for each node-layer tuple $(v, l)$ in replica $\ReplicaID$’s Stacked-DRG.

The labeling preimage for the first node $v_0 = 0$ in every Stacked-DRG layer $l \in [N_\layers]$ for a replica $\ReplicaID$ is defined:

$\bi \preimage_{v_0, l}: \Byte^{[44]} =$
$\bi\quad\quad \ReplicaID \concat \beencode(l \as \u{32}) \as \Byte^{[4]} \concat \beencode(v_0 \as \u{64}) \as \Byte^{[8]}$

The labeling preimage for each node $v > 0$ in the first layer $l_0 = 0$ is defined:

$\bi \preimage_{v, l_0}: \Byte^{[1228]} =$
$\bi\quad\quad \ReplicaID$
$\bi\quad\quad \|\ \beencode(l_0 \as \u{32}) \as \Byte^{[4]}$
$\bi\quad\quad \|\ \beencode(v \as \u{64}) \as \Byte^{[8]}$
$\bi\quad\quad \big\|_{\Label_{u, l_0} \hspace{1pt} \in \hspace{1pt} \ParentLabels_{\mathbf{u}_\drg}^\star} \Label_{u, l_0} \as \Byte^{[32]} \vphantom{{|^|}^x}$

The labeling preimage for each node $v > 0$ in each layer $l > 0$ is defined:

$\bi \preimage_{v, l}: \Byte^{[1228]} =$
$\bi\quad\quad \ReplicaID$
$\bi\quad\quad \|\ \beencode(l \as \u{32}) \as \Byte^{[4]}$
$\bi\quad\quad \|\ \beencode(v \as \u{64}) \as \Byte^{[8]}$
$\bi\quad\quad \big\|_{\Label_u \hspace{1pt} \in \hspace{1pt} \ParentLabels_{\mathbf{u}_\total}^\star} \Label_u \as \Byte^{[32]} \vphantom{{|^|}^x}$

The Labels Matrix

The function $\textsf{generate\_labels}$ describes how every Stacked-DRG node is labeled for a replica. Nodes in the first layer $l_0 = 0$ are labeled using only DRG parents’ labels; nodes in every subsequent layer $l > 0$ are labeled using both their DRG and expander parents’ labels. The first node $v_0$ in every layer is not labeled using parents.

Additional Notation:

$\Labels_R$
Denotes that $\Labels$ is the labeling for a single replica $R$.

$l_0 = 0$
The constant $l_0$ is used to signify the first Stacked-DRG layer.

$l_v$
The Stacked-DRG layer in which a node $v$ resides.

$\Label_{v, l_v}$
The label of node $v$ in the Stacked-DRG layer $l_v$.

$u_{\langle \drg | \exp \rangle}$
Denotes that parent $u$ may be a DRG or expander parent.

$\Label_{u_\drg}$
The label of a DRG parent (in $v$’s layer $l$).

$\Label_u \equiv \or{\Label_{u_\drg, l_v}}{\Label_{u_\exp, l_v - 1}}$
The label of either a DRG or expander parent (in layer $l$ or $l - 1$ respectively).

$\overline{\underline{\Function \generatelabels(\ReplicaID) \rightarrow \Labels_R}}$
$\line{1}{\bi}{\Labels: {\Label^{[N_\nodes]}}^{[N_\layers]} = [\ ]}$

$\line{2}{\bi}{\for v \in [N_\nodes]:}$
$\line{3}{\bi}{\quad \Labels[l_0][v] = \labelwithdrgparents(\ReplicaID, v, \Labels[l_0][..v])}$

$\line{4}{\bi}{\for l \in [1, N_\layers - 1]:}$
$\line{5}{\bi}{\quad \for v \in [N_\mathsf{nodes}]:}$
$\line{6}{\bi}{\quad\quad \Labels[l][v] = \labelwithallparents(\ReplicaID, l, v, \Labels[l][..v], \Labels[l - 1])}$

$\line{7}{\bi}{\return \Labels}$

Code Comments:

  • Lines 2-3: Label the first Stacked-DRG layer.
  • Lines 4-6: Label the remaining Stacked-DRG layers.

The function $\labelwithdrgparents$ is used to label a node $v$ in the first Stacked-DRG layer $l_0 = 0 \thin$.

The label of each node $v$ in $l_0$ is dependent on the labels of $v$’s DRG parents (where $v$’s DRG parents are in layer $l_v = l_0$). $v$’s DRG parents always come from the node range $[v]$ in layer $l_v$, thus we pass in the argument $\Labels[l_0][..v]$ which contains the label of every node up to $v$ in $l_0 \thin$. $\Labels[l_0][..v]$ is guaranteed to be labeled up to node $v$ because $\labelwithdrgparents$ is called sequentially for each node $v \in [N_\nodes] \thin$.

$\overline{\underline{\Function \labelwithdrgparents(\ReplicaID, v: \NodeIndex, \Labels[l_0][..v]) \rightarrow \Label_{v, l_0}}}$
$\line{1}{\bi}{\preimage: \Byte^{[*]} =}$
$\quad\quad \ReplicaID \concat \beencode(l_0 \as \u{32}) \as \Byte^{[4]} \concat \beencode(v \as \u{64}) \as \Byte^{[8]}$

$\line{2}{\bi}{\if v > 0:}$
$\line{3}{\bi}{\quad \mathbf{u}_\drg: \textsf{NodeIndex}^{[d_\drg]} = \getdrgparents(v)}$
$\line{4}{\bi}{\quad \for i \in [N_\parentlabels]:}$
$\line{5}{\bi}{\quad\quad u_\drg = \mathbf{u}_\drg[i \MOD d_\drg]}$
$\line{6}{\bi}{\quad\quad \Label_{u_\drg, l_0}: \Fqsafe = \Labels[l_0][u_\drg]}$
$\line{7}{\bi}{\quad\quad \preimage\dot\extend(\leencode(\Label_{u_\drg, l_0}) \as \Safe)}$

$\line{8}{\bi}{\return \Sha{254}(\preimage) \as \Fqsafe}$

The function $\labelwithallparents$ is used to label a node $v$ in all layers other than the first Stacked-DRG layer $l_v > 0 \thin$.

The label of a node $v$ in layers $l_v > 0$ is dependent on both the labels of $v$’s DRG and expander parents. $\labelwithallparents$ takes the argument $\Labels[l_v][..v]$ (the current layer $l_v$ being labeled, containing labels up to node $v$) to retrieve the labels of $v$’s DRG parents and the argument $\Labels[l_v - 1]$ (the previous layer’s labels) to retrieve the labels of $v$’s expander parents.

$\overline{\Function \labelwithallparents(\bi}$
$\quad \ReplicaID,$
$\quad l_v \in [1, N_\layers - 1],$
$\quad v: \NodeIndex,$
$\quad \Labels[l_v][..v],$
$\quad \Labels[l_v - 1],$
$\underline{) \rightarrow \Label_{v, l_v} \qquad\qquad\qquad\qquad\qquad\bi}$
$\line{1}{\bi}{\preimage: \Byte^{[*]} =}$
$\quad\quad \ReplicaID \concat \beencode(l_v \as \u{32}) \as \Byte^{[4]} \concat \beencode(v \as \u{64}) \as \Byte^{[8]}$

$\line{2}{\bi}{\if v > 0:}$
$\line{3}{\bi}{\quad \mathbf{u}_\total: \textsf{NodeIndex}^{[d_\total]} = \getallparents(v)}$
$\line{4}{\bi}{\quad \for i \in [N_\parentlabels]:}$
$\line{5}{\bi}{\quad\quad p = i \MOD d_\total}$
$\line{6}{\bi}{\quad\quad u_{\langle \drg | \exp \rangle} = \mathbf{u}_\total[p]}$
$\line{7}{\bi}{\quad\quad \Label_u: \Fqsafe = \if p < d_\drg}:$
$\quad\quad\quad\quad \Labels[l_v][u_\drg]$
$\quad\quad\quad \else:$
$\quad\quad\quad\quad \Labels[l_v - 1][u_\exp]$
$\line{8}{\bi}{\quad\quad \preimage\dot\extend(\leencode(\Label_u) \as \Safe)}$

$\line{9}{\bi}{\return \Sha{254}(\preimage) \as \Fqsafe}$

Column Commitments

The column commitment process is used to commit to a replica’s labeling $\Labels$. The column commitment $\CommC$ is generated by building a $\TreeC: \OctTree$ over the labeling matrix $\Labels$ and taking the tree’s root.

To build a tree over the matrix $\Labels$ we hash each of its $N_\nodes$ columns (where each column contains $N_\layers$ $\Label$’s) using the hash function $\Poseidon_{11}$, producing $N_\nodes$ column digests. The $i^{th}$ column digest is the $i^{th}$ leaf in $\TreeC$.

$\overline{\underline{\Function \createtreecfromlabels(\Labels) \rightarrow \TreeC}}$
$\line{1}{\bi}{\leaves: {\Fq}^{[N_\nodes]} = [\ ]}$
$\line{2}{\bi}{\for v \in [N_\nodes]:}$
$\line{3}{\bi}{\quad \column_v: \Fqsafe^{[N_\layers]} = \Labels[:][v]}$
$\line{4}{\bi}{\quad \digest: \Fq = \Poseidon_{11}(\column_v)}$
$\line{5}{\bi}{\quad \leaves\dot\push(\digest)}$
$\line{6}{\bi}{\return \OctTree\cc\new(\leaves)}$
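
The shape of this computation can be sketched as follows; the column hash stands in for $\Poseidon_{11}$ and is passed in as a closure, since its parameters are defined elsewhere.

    // Sketch of the leaf construction in create_tree_c_from_labels: hash each of
    // the N_nodes columns of the (layer-major) label matrix into one TreeC leaf.
    fn tree_c_leaves<F: Copy>(
        labels: &[Vec<F>],              // labels[l][v]: N_layers rows of N_nodes labels
        poseidon11: impl Fn(&[F]) -> F, // column hash producing one digest per node
    ) -> Vec<F> {
        let n_nodes = labels[0].len();
        (0..n_nodes)
            .map(|v| {
                // Column v: node v's label in every layer, first layer first.
                let column: Vec<F> = labels.iter().map(|layer| layer[v]).collect();
                poseidon11(&column)
            })
            .collect()
    }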

Encoding

Encoding is the process by which a sector $D: \Safe^{[N_\nodes]}$ is transformed into its encoding $R: \Fqsafe^{[N_\nodes]}$. The encoding function is node-wise prime field addition $\oplus$, where “node-wise” means that every distinct $\Safe$ slice $D_i \in D$ is discretely encoded.

$D$ is viewed as an array of $N_\nodes$ distinct byte arrays $D_i: \Safe$. Sector preprocessing ensures that each $D_i$ is a valid $\Safe$ (represents a valid 254-bit or less field element $\Fqsafe$).

$\bi D: \Safe^{[N_\nodes]} = [D_0, \ldots, D_{N_\nodes - 1}]$
$\bi D_i: \Safe = D[i * 32 \thin\ldotdot\thin (i + 1) * 32]$

A unique encoding key $K$ is derived for every distinct $\ReplicaID$ via the PoRep labeling process producing $\Labels$. Each $D_i \in D$ is encoded by a distinct encoding key $K_i \in K$, where $K_i$ is the $i^{th}$ node’s label in the last Stacked-DRG layer.

$\bi K: \Label^{[N_\nodes]} = \Labels[N_\layers - 1][:]$
$\bi K_i: \Label_{i, l_\last} = \Labels[N_\layers - 1][i]$

$D$ is encoded into $R$ via node-wise field addition. Each $D_i \in D$ is interpreted as a field element and encoded into $R_i$ by adding $K_i$ to $D_i$. The resulting array of field elements produced via field addition is the encoding $R$ of $D$.

$\bi R: \Fq^{[N_\nodes]} = [R_0, \ldots, R_{N_\nodes - 1}]$
$\bi R_i: \Fq = D_i \as \Fqsafe \oplus K_i$

The function $\encode$ is used to encode a sector $D$ into $R$ given an encoding key $K$ derived from $R$’s $\ReplicaID$.

$\overline{\underline{\Function \encode(D: \Safe, K: \Label^{[N_\nodes]}) \rightarrow R}}$
$\line{1}{\bi}{R: \Fq^{[N_\nodes]} = [\ ]}$
$\line{2}{\bi}{\for i \in [N_\nodes]:}$
$\line{3}{\bi}{\quad D_i: \Safe = D[i]}$
$\line{4}{\bi}{\quad K_i: \Label = K[i]}$
$\line{5}{\bi}{\quad R_i = D_i \as \Fqsafe \oplus K_i}$
$\line{6}{\bi}{\quad R\dot\push(R_i)}$
$\line{7}{\bi}{\return R}$
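
The node-wise structure of encoding (and of decoding, the corresponding field subtraction) is illustrated by the sketch below. A small Mersenne-prime modulus stands in for the field $\Fq$ so that the example is self-contained; the implementation performs the same addition on 255-bit field elements.

    // Sketch of node-wise encoding R_i = D_i + K_i and decoding D_i = R_i - K_i
    // over a toy prime field (2^61 - 1 is prime; it is NOT the field used by Filecoin).
    const TOY_MODULUS: u64 = (1 << 61) - 1;

    fn encode(data: &[u64], key: &[u64]) -> Vec<u64> {
        data.iter()
            .zip(key)
            .map(|(d, k)| ((d % TOY_MODULUS) + (k % TOY_MODULUS)) % TOY_MODULUS)
            .collect()
    }

    fn decode(replica: &[u64], key: &[u64]) -> Vec<u64> {
        replica
            .iter()
            .zip(key)
            .map(|(r, k)| (r + TOY_MODULUS - (k % TOY_MODULUS)) % TOY_MODULUS)
            .collect()
    }

    fn main() {
        let data = vec![1, 2, 3];
        let key = vec![10, 20, 30]; // labels from the last Stacked-DRG layer
        let replica = encode(&data, &key);
        assert_eq!(decode(&replica, &key), data);
    }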

Replication

Replication is the entire process by which a sector $D$ is uniquely encoded into a replica $R$. Replication encompasses Stacked-DRG labeling, encoding $D$ into $R$, and the generation of trees $\TreeC$ over $\Labels$ and $\TreeR$ over $R$.

A miner derives a unique $\ReplicaID$ for each $R$ using the commitment to the replica’s sector $\CommD = \TreeD\dot\root \thin$ (where $\TreeD$ is built over the nodes of the unencoded sector $D$ associated with $R \thin$).

Given a sector $D$ and its commitment $\CommD$, replication proceeds as follows:

  1. Generate $R$’s unique $\ReplicaID$.
  2. Generate $\Labels$ from $\ReplicaID$, thus deriving the key $K$ that encodes $D$ into $R$.
  3. Generate $\TreeC$ over the columns of $\Labels$ via the column commitment process.
  4. Encode $D$ into $R$ using the encoding key $K$.
  5. Generate a $\TreeR: \OctTree$ over the replica $R$.
  6. Commit to $R$ and its associated labeling $\Labels$ via the commitment $\CommCR$.

The function $\replicate$ runs the entire replication process for a sector $D$.

$\overline{\Function \replicate( \qquad\qquad\qquad\qquad\qquad\quad\bi\ }$
$\quad D: \Safe^{[N_\nodes]},$
$\quad \CommD: \Safe,$
$\quad \SectorID_D: \u{64},$
$\quad \R_\replicaid: \Byte^{[32]},$
$\underline{) \rightarrow \ReplicaID, R, \TreeC, \TreeR, \CommCR, \Labels}$
$\line{1}{\bi}{\ReplicaID = \createreplicaid(\ProverID, \SectorID, \R_\replicaid, \CommD, \PorepID)}$
$\line{2}{\bi}{\Labels = \generatelabels(\ReplicaID)}$
$\line{3}{\bi}{\TreeC = \createtreecfromlabels(\Labels)}$
$\line{4}{\bi}{K: \Label^{[N_\nodes]} = \Labels[N_\layers - 1][:]}$
$\line{5}{\bi}{R: \Fq^{[N_\nodes]} = \textsf{encode}(D, K)}$
$\line{6}{\bi}{\TreeR = \OctTree\cc\new(R)}$
$\line{7}{\bi}{\CommCR: \Fq = \Poseidon_2([\TreeC\dot\root, \TreeR\dot\root])}$
$\line{8}{\bi}{\return \ReplicaID, R, \TreeC, \TreeR, \CommCR, \Labels}$

ReplicaID Generation

The function $\createreplicaid$ describes how a miner having the ID $\ProverID$ is able to generate a $\ReplicaID$ for a replica $R$ of sector $D$, where $D$ has a unique ID $\SectorID$ and commitment $\CommD$. The prover uses a unique random value $\R_\ReplicaID$ for each $\ReplicaID$ generated.

Implementation: storage_proofs::porep::stacked::vanilla::params::generate_replica_id()

$\overline{\Function \createreplicaid(\ }$
$\quad \ProverID: \Byte^{[32]},$
$\quad \SectorID: \u{64},$
$\quad \R_\replicaid: \Byte^{[32]},$
$\quad \CommD: \Safe,$
$\quad \PorepID: \Byte^{[32]},$
$\underline{) \rightarrow \ReplicaID \qquad\qquad\qquad\bi\ }$
$\line{1}{}{\preimage: \Byte^{[136]} =}$
$\quad\quad \ProverID$
$\quad\quad \|\ \beencode(\SectorID) \as \Byte^{[8]}$
$\quad\quad \|\ \R_\ReplicaID$
$\quad\quad \|\ \CommD$
$\quad\quad \|\ \PorepID$

$\line{2}{}{\return \Sha{254}(\preimage) \as \Fqsafe}$

Sector Construction

A sector $D$ is constructed from Filecoin client data, where the aggregated client data has been preprocessed (bit-padded) such that two zero bits are placed between each distinct 254-bit slice of client data. This padding process results in a sector $D$ such that every 256-bit slice represents a valid 254-bit field element $\Safe \thin$.

A Merkle tree $\TreeD: \BinTree$ is constructed for sector $D$ whose leaves are the 256-bit slices $D_i: \Safe \in D \thin$.

$\bi D_i: \Safe = D[i * 32 \thin\ldotdot\thin (i + 1) * 32]$
$\bi \TreeD = \BinTree\cc\new([D_0, \ldots, D_{N_\nodes - 1}])$
$\bi \CommD: \Safe = \TreeD\dot\root$

Each $\TreeD$ is constructed over the preprocessed sector data $D$.

PoRep Challenges

The function $\getporepchallenges$ creates the PoRep challenge set for a replica $R$’s partition-$k$ PoRep partition proof.

Implementation: storage_proofs::porep::stacked::vanilla::challenges::LayerChallenges::derive_internal()

$\overline{\Function\ \getporepchallenges( \quad}$
$\quad \ReplicaID,$
$\quad \R_\porepchallenges: \Byte^{[32]},$
$\quad k: [N_{\poreppartitions / \batch}],$
$\underline{) \rightarrow \PorepChallenges_{R, k} \qquad\qquad\qquad}$
$\line{1}{\bi}{\challenges: \NodeIndex^{[N_{\porepchallenges / k}]} = [\ ]}$
$\line{2}{\bi}{\for \challengeindex \in [N_{\porepchallenges / k}]:}$
$\line{3}{\bi}{\quad \challengeindex_\porepbatch: \u{32} = k * N_{\porepchallenges / k} + \challengeindex}$
$\line{4}{\bi}{\quad \preimage: \Byte^{[68]} =}$
$\quad\quad\quad \leencode(\ReplicaID) \as \Byte^{[32]}$
$\quad\quad\quad \|\ \R_\porepchallenges$
$\quad\quad\quad \|\ \leencode(\challengeindex_\porepbatch) \as \Byte^{[4]}$
$\line{5}{\bi}{\quad \digest: \Byte^{[32]} = \Sha{256}(\preimage)}$
$\line{6}{\bi}{\quad \digestint: \u{256} = \ledecode(\digest)}$
$\line{7}{\bi}{\quad c: \NodeIndex \setminus 0 = (\digestint \MOD (N_\nodes - 1)) + 1}$
$\line{8}{\bi}{\quad \challenges\dot\push(c)}$
$\line{9}{\bi}{\return \challenges}$
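
A sketch of the same derivation in code; the SHA-256 call is passed in as a closure so the sketch carries no dependency, and the little-endian digest is reduced modulo $N_\nodes - 1$ byte by byte.

    // Sketch of get_porep_challenges. `sha256` stands in for a real SHA-256
    // implementation. Challenges are non-zero node indexes in [1, n_nodes).
    fn porep_challenges(
        replica_id: [u8; 32],        // little-endian encoding of ReplicaID
        randomness: [u8; 32],        // R_porep_challenges
        k: u32,                      // partition index
        challenges_per_partition: u32,
        n_nodes: u64,
        sha256: impl Fn(&[u8]) -> [u8; 32],
    ) -> Vec<u64> {
        (0..challenges_per_partition)
            .map(|i| {
                let batch_index = k * challenges_per_partition + i;
                // preimage = replica_id || randomness || le_encode(batch_index)
                let mut preimage = Vec::with_capacity(68);
                preimage.extend_from_slice(&replica_id);
                preimage.extend_from_slice(&randomness);
                preimage.extend_from_slice(&batch_index.to_le_bytes());
                let digest = sha256(&preimage);
                // Interpret the digest as a little-endian integer and reduce it
                // modulo (n_nodes - 1); adding 1 ensures node 0 is never challenged.
                let modulus = (n_nodes - 1) as u128;
                let mut acc: u128 = 0;
                for &byte in digest.iter().rev() {
                    acc = (acc * 256 + byte as u128) % modulus;
                }
                acc as u64 + 1
            })
            .collect()
    }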

Vanilla PoRep

Proving

A PoRep prover generates a $\PorepPartitionProof_k$ for each partition $k$ in a replica $R$’s batch of PoRep proofs. Each partition proof is generated for $N_{\porepchallenges / k}$ challenges, the challenge set $\PorepChallenges_{R, k}$ (each partition proof’s challenge set is specific to the replica $R$ and partition $k$).

A single partition proof generated by a PoRep prover shows that:

  • The prover knows a valid Merkle path for $c$ in $\TreeD$ that is consistent with the public $\CommD$.
  • The prover knows valid Merkle paths for $c$ in trees $\TreeC$ and $\TreeR$ which are consistent with the commitment $\CommCR$.
  • The prover knows $c$’s labeling in each Stacked-DRG layer $\Column_c = \ColumnProof_c\dot\column \thin$ by hashing $\Column_c$ into a leaf in $\TreeC$ that is consistent with $\CommCR$.
  • For each layer $l$ in the Stacked-DRG, the prover knows $c$’s labeling preimage $\ParentLabels$ (taken from the columns in $\ParentColumnProofs$), such that the parent labels are consistent with $\CommCR$.
  • The prover knows the key $K_c$ used to encode $D_c$ into $R_c$ (where $D_c$, $K_c$, and $R_c$ were already shown to be consistent with the commitments $\CommD$ and $\CommCR$).

$\overline{\mathbf{Function:}\ \createvanillaporepproof(\ }$
$\quad k,$
$\quad \ReplicaID,$
$\quad \TreeD,$
$\quad \TreeC,$
$\quad \TreeR,$
$\quad \Labels,$
$\quad \R_\porepchallenges: \Byte^{[32]},$
$\underline{) \rightarrow \PorepPartitionProof_{R, k} \qquad\qquad\qquad}$
$\line{1}{\bi}{\PorepPartitionProof_{R, k}: \PorepChallengeProof^{\thin[N_{\porepchallenges / k}]} = [\ ]}$
$\line{2}{\bi}{\PorepChallenges_{R, k} = \getporepchallenges(\ReplicaID, \R_\porepchallenges, k)}$

$\line{3}{\bi}{\for c \in \PorepChallenges_{R, k}:}$
$\line{4}{\bi}{\quad \TreeDProof_c = \TreeD\dot\createproof(c)}$

$\line{5}{\bi}{\quad \ColumnProof_c\ \{}$
$\quad\quad\quad \column: \Labels[:][c],$
$\quad\quad\quad \leaf,\thin \root,\thin \path \Leftarrow \TreeC\dot\createproof(c),$
$\quad\quad \}$

$\line{6}{\bi}{\quad \TreeRProof_c = \TreeR\dot\createproof(c)}$

$\line{7}{\bi}{\quad \ParentColumnProofs_{\mathbf{u}_\total}: \ColumnProof^{[d_\total]} = [\ ]}$
$\line{8}{\bi}{\quad \mathbf{u}_\total: \NodeIndex^{[d_\total]} = \getallparents(c, \PorepID)}$
$\line{9}{\bi}{\quad \for u \in \mathbf{u}_\total:}$
$\line{10}{}{\quad\quad \ColumnProof_u\ \{}$
$\quad\quad\quad\quad \column: \Labels[:][u],$
$\quad\quad\quad\quad \leaf,\thin \root,\thin \path \Leftarrow \TreeC\dot\createproof(u),$
$\quad\quad\quad \}$
$\line{11}{}{\quad\quad \ParentColumnProofs_{\mathbf{u}_\total}\dot\push(\ColumnProof_u)}$

$\line{12}{}{\quad \PorepChallengeProof_c\ \{}$
$\quad\quad\quad\quad \TreeDProof_c,$
$\quad\quad\quad\quad \ColumnProof_c,$
$\quad\quad\quad\quad \TreeRProof_c,$
$\quad\quad\quad\quad \ParentColumnProofs_{\mathbf{u}_\total},$
$\quad\quad \}$
$\line{13}{}{\quad \PorepPartitionProof_{R, k}\dot\push(\PorepChallengeProof_c)}$
$\line{14}{}{\return \PorepPartitionProof_{R, k}}$

Verification

Implementation:

$\overline{\Function\ \verifyvanillaporepproof(}$
$\quad \PorepPartitionProof_{R, k} \thin,$
$\quad k: [N_{\poreppartitions / \batch}],$
$\quad \ReplicaID,$
$\quad \CommD,$
$\quad \CommCR,$
$\quad \R_\porepchallenges: \Byte^{[32]},$
$\underline{) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{\PorepChallenges_{R, k} = \getporepchallenges(\ReplicaID, \R_\porepchallenges, k)}$

$\line{2}{\bi}{\for i \in [N_{\porepchallenges / k}]}:$
$\line{3}{\bi}{\quad c = \PorepChallenges_{R, k}[i]}$
$\line{4}{\bi}{\quad \TreeDProof_c, \ColumnProof_c, \TreeRProof_c, \ParentColumnProofs_{\mathbf{u}_\total}}$
$\quad\quad\quad \Leftarrow \PorepPartitionProof_{R, k}[i]$

$\line{5}{\bi}{\quad \assert(\TreeDProof_c\dot\root = \CommD)}$

$\line{6}{\bi}{\quad \CommCR^\dagger = \Poseidon_2([\ColumnProof_c\dot\root, \TreeRProof_c\dot\root])}$
$\line{7}{\bi}{\quad \assert(\CommCR^\dagger = \CommCR)}$

$\line{8}{\bi}{\quad \assert(\calculatebintreechallenge(\TreeDProof_c\dot\path) = c)}$
$\line{9}{\bi}{\quad \assert(\calculateocttreechallenge(\ColumnProof_c\dot\path) = c)}$
$\line{10}{}{\quad \assert(\calculateocttreechallenge(\TreeRProof_c\dot\path) = c)}$

$\line{11}{}{\quad \assert(\bintreeproofisvalid(\TreeDProof_c))}$
$\line{12}{}{\quad \assert(\octtreeproofisvalid(\ColumnProof_c))}$
$\line{13}{}{\quad \assert(\octtreeproofisvalid(\TreeRProof_c))}$

$\line{14}{}{\quad \assert(\ColumnProof_c.\leaf = \Poseidon_{11}(\ColumnProof_c.\column))}$

$\line{15}{}{\quad \mathbf{u}_\total: \NodeIndex^{[d_\total]} = \getallparents(c, \PorepID)}$
$\line{16}{}{\quad \for p \in [d_\total]:}$
$\line{17}{}{\quad\quad u = \mathbf{u}_\total[p]}$
$\line{18}{}{\quad\quad \ColumnProof_u = \ParentColumnProofs_{\mathbf{u}_\total}[p]}$
$\line{19}{}{\quad\quad \assert(\ColumnProof_u.\root = \ColumnProof_c.\root)}$
$\line{20}{}{\quad\quad \assert(\calculateocttreechallenge(\ColumnProof_u\dot\path) = u)}$
$\line{21}{}{\quad\quad \assert(\octtreeproofisvalid(\ColumnProof_u))}$
$\line{22}{}{\quad\quad \assert(\ColumnProof_u.\leaf = \Poseidon_{11}(\ColumnProof_u.\column))}$

$\line{23}{}{\quad \for l \in [N_\layers]:}$
$\line{24}{}{\quad\quad \calculatedlabel_{c, l} = \createlabel_\V(\ReplicaID, l, c, \ParentColumnProofs_{\mathbf{u}_\total})}$
$\line{25}{}{\quad\quad \assert(\calculatedlabel_{c, l} = \ColumnProof_c.\column[l])}$

$\line{26}{}{\quad D_c = \TreeDProof.\leaf}$
$\line{27}{}{\quad {R_c}^\dagger = \TreeRProof.\leaf}$
$\line{28}{}{\quad {K_c}^\dagger = \ColumnProof_c.\column[N_\layers - 1]}$
$\line{29}{}{\quad \assert({R_c}^\dagger \ominus {K_c}^\dagger = D_c)}$

Verifier Labeling

Implementation: storage_proofs::porep::stacked::vanilla::labeling_proof::LabelingProof::create_label()

Additional Notation:

$\createlabel_\V$
Designates the function $\createlabel$ as being used by a PoRep verifier $\V$.

$c: \NodeIndex \setminus 0$
The node index of a PoRep challenge. The first node 0 is never challenged in PoRep proofs.

$d = \tern{l_c = 0}{d_\drg}{d_\total}$
The number of parents that challenge $c$ has (where $c$ is in the layer $l_c$).

$\Label_{c, l}^\dagger: \Fqsafe$
The label of the challenge node $c$ in layer $l$ calculated from the unverified $\ParentColumnProofs^\dagger$.

$\Label_{u_\exp, l - 1}: \Fqsafe$
The label of challenge $c$’s expander parent $u_\exp$ in layer $l - 1$. Expander parents come from the layer prior to $c$’s layer $l$.

$p_\drg \in [d_\drg]$
$p_\total \in [d_\total]$
The index of a parent in $c$’s parent arrays $\mathbf{u}_\drg$ and $\mathbf{u}_\total$ respectively.

$u_\drg, u_\exp: \NodeIndex$
The node index of a DRG or expander graph parent for $c$.

$\parentlabels': \Label^{[N_\parentlabels]}$
The set of parent labels repeated until its length is $N_\parentlabels$.

$\overline{\Function: \createlabel_\V( \qquad\ }$
$\quad \ReplicaID,$
$\quad l: [N_\layers],$
$\quad c: \NodeIndex \setminus 0,$
$\quad \ParentColumnProofs_{\mathbf{u}_\total}^\dagger,$
$\underline{) \rightarrow \Label_{c, l}^\dagger \qquad\qquad\qquad\qquad\quad}$
$\line{1}{\bi}{\parentlabels: {\Label_u}^{[d]} = [\ ]}$

$\line{2}{\bi}{\for p_\drg \in [d_\drg]:}$
$\line{3}{\bi}{\quad\quad \Label_{u_\drg, l} = \ParentColumnProofs_{\mathbf{u}_\total}[p_\drg]\dot\column[l]}$
$\line{4}{\bi}{\quad\quad \parentlabels\dot\push(\Label_{u_\drg, l})}$

$\line{5}{\bi}{\if l > 0:}$
$\line{6}{\bi}{\quad \for p_\exp \in [d_\drg, d_\total - 1]:}$
$\line{7}{\bi}{\quad\quad \Label_{u_\exp, l - 1} = \ParentColumnProofs_{\mathbf{u}_\total}[p_\exp]\dot\column[l - 1]}$
$\line{8}{\bi}{\quad\quad \parentlabels\dot\push(\Label_{u_\exp, l - 1})}$

$\line{9}{\bi}{\parentlabels': \Label^{[N_\parentlabels]} = \parentlabels\dot\repeattolength(N_\parentlabels)}$

$\line{10}{}{\preimage: \Byte^{[1228]} =}$
$\quad\quad \ReplicaID$
$\quad\quad \|\ \beencode(l \as \u{32}) \as \Byte^{[4]}$
$\quad\quad \|\ \beencode(c \as \u{64}) \as \Byte^{[8]}$
$\quad\quad \big\|_{\Label_u \hspace{1pt} \in \hspace{1pt} \parentlabels'} \thin \Label_u \as \Byte^{[32]}$

$\line{11}{}{\return \Sha{254}(\preimage) \as \Fq}$

PoRep Circuit

Implementation:

Additional Notation:

$\PorepPartitionProof_{R, k}$
The $k^{th}$ PoRep partition proof generated for the replica $R$.

$\treedleaf_{\auxb, c}$
The circuit value for a challenge $c$’s leaf in $\TreeD$.

$\calculatedcommd_{\auxb, c}$
The circuit value calculated for $\CommD$ using challenge $c$’s $\TreeDProof_c$.

$\column_{[\auxb], u}$
The array circuit values representing a parent $u$ of challenge $c$’s label in each Stacked-DRG layer.

$\parentcolumns_{[[\auxb]], \mathbf{u}_\total}$
An array of an array of circuit values, the allocated column for each parent $u \in \mathbf{u}_\total \thin$.

$l_c$
The challenge $c$’s layer in the Stacked-DRG.

$\parentlabelsbits_{[[\auxb + \constb, \lebytes]]}$
An array where each element is a parent $u$’s label, an array of allocated and unallocated bits $_{[\auxb + \constb]}$ having $\lebytes$ bit order.

$\calculatedlabel_{\auxb, c, l}$
The label calculated for challenge $c$ residing in layer $l$.

$\overline{\Function \createporepcircuit(}$
$\quad \PorepPartitionProof_{R, k} \thin,$
$\quad k,$
$\quad \ReplicaID,$
$\quad \CommD,$
$\quad \CommC,$
$\quad \CommR,$
$\quad \CommCR,$
$\quad \R_\porepchallenges: \Byte^{[32]},$
$\underline{) \rightarrow \RCS \qquad\qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{\cs = \RCS\cc\new()}$

$\line{2}{\bi}{\replicaid_\pubb: \CircuitVal \deq \cs\dot\publicinput(\ReplicaID)}$
$\line{3}{\bi}{\replicaidbits_{[\auxb, \Le]}: \CircuitBit^{[255]}\ \deq \lebitsgadget(\cs, \replicaid_\pubb, 255)}$
$\line{4}{\bi}{\replicaidbits_{[\auxb+\constb, \lebytes]}: \CircuitBitOrConst^{[256]} =}$
$\quad\quad \lebitstolebytes(\replicaidbits_{[\auxb, \Le]})$

$\line{5}{\bi}{\commd_\pubb: \CircuitVal \deq \cs\dot\publicinput(\CommD)}$
$\line{6}{\bi}{\commcr_\pubb: \CircuitVal \deq \cs\dot\publicinput(\CommCR)}$

$\line{7}{\bi}{\commc_\auxb: \CircuitVal \deq \cs\dot\privateinput(\CommC)}$
$\line{8}{\bi}{\commr_\auxb: \CircuitVal \deq \cs\dot\privateinput(\CommR)}$

$\line{9}{\bi}{\calculatedcommcr_\auxb: \CircuitVal\ \deq}$
$\quad\quad \poseidongadget{2}(\cs, [\commc_\auxb, \thin \commr_\auxb])$
$\line{10}{}{\cs\dot\assert(\calculatedcommcr_\auxb = \commcr_\pubb)}$

$\line{11}{}{\PorepChallenges_k = \getporepchallenges(\ReplicaID, \R_\porepchallenges, k)}$

$\line{12}{}{\for i \in [N_{\porepchallenges / k}]:}$
$\line{13}{}{\quad c: \NodeIndex = \PorepChallenges[i]}$
$\line{14}{}{\quad \TreeDProof_c, \ColumnProof_c, \TreeRProof_c, \ParentColumnProofs_{\mathbf{u}_\total}}$
$\quad\quad\quad \Leftarrow \PorepPartitionProof[i]$

$\line{15}{}{\quad \challengebits_{[\auxb, \Le]}: \CircuitBit^{[64]} \deq \lebitsgadget(\cs, c, 64)}$
$\line{16}{}{\quad \packedchallenge_\pubb: \CircuitVal\ \deq}$
$\quad\quad\quad \packbitsasinputgadget(\cs, \challengebits_{[\auxb, \Le]})$

$\line{17}{}{\quad \treedleaf_{\auxb, c} \deq \cs\dot\privateinput(\TreeDProof_c\dot\leaf)}$
$\line{18}{}{\quad \calculatedcommd_{\auxb, c}\ \deq}$
$\quad\quad\quad \bintreerootgadget(\cs, \treedleaf_{\auxb, c}\thin, \TreeDProof_c\dot\path)$
$\line{19}{}{\quad \cs\dot\assert(\calculatedcommd_{\auxb, c} = \commd_\pubb)}$

$\line{20}{}{\quad \parentcolumns_{[[\auxb]], \mathbf{u}_\total}: {\CircuitVal^{[N_\layers]}}^{[d_\total]} = [\ ]}$
$\line{21}{}{\quad \for \ColumnProof_u \in \ParentColumnProofs_{\mathbf{u}_\total}:}$
$\line{22}{}{\quad\quad \column_{[\auxb], u}: \CircuitVal^{[N_\layers]}\ \deq}$
$\quad\quad\quad\quad [\thin \cs\dot\privateinput(\label_{u, l}) \mid \forall\thin \label_{u, l} \in \ColumnProof_u\dot\column \thin]$
$\line{23}{}{\quad\quad \calculatedtreecleaf_{\auxb, u}: \CircuitVal \deq \poseidongadget{11}(\cs, \column_{[\auxb], u})}$
$\line{24}{}{\quad\quad \calculatedcommc_{\auxb, u}: \CircuitVal\ \deq}$
$\quad\quad\quad\quad \octtreerootgadget(\cs,\thin \calculatedtreecleaf_{\auxb, u},\thin \ColumnProof_u\dot\path)$
$\line{25}{}{\quad\quad \cs\dot\assert(\calculatedcommc_{\auxb, u} = \commc_\auxb)}$
$\line{26}{}{\quad\quad \parentcolumns_{[[\auxb]], \mathbf{u}_\total}\dot\push(\column_{[\auxb], u})}$

$\line{27}{}{\quad \calculatedcolumn_{[\auxb], c}: \CircuitVal^{[N_\layers]} = [\ ]}$
$\line{28}{}{\quad \for l_c \in [N_\layers]:}$
$\line{29}{}{\quad\quad \layerbits_{[\auxb, \Le]}: \CircuitBit^{[32]} \deq \lebitsgadget(\cs, l_c, 32)}$

$\line{30}{}{\quad\quad \parentlabels_{[\auxb]}: \CircuitVal^{[*]} = [\ ]}$
$\line{28}{}{\quad\quad \for p_\drg \in [d_\drg]:}$
$\line{31}{}{\quad\quad\quad \parentlabel_{\auxb, u_\drg} = \parentcolumns_{[[\auxb]], \mathbf{u}_\total}[p_\drg][l_c]}$
$\line{32}{}{\quad\quad\quad \parentlabels_{[\auxb]}\dot\push(\parentlabel_{\auxb, u_\drg})}$
$\line{33}{}{\quad\quad \if l_c > 0:}$
$\line{34}{}{\quad\quad\quad \for p_\exp \in [d_\drg, d_\total - 1]:}$
$\line{35}{}{\quad\quad\quad\quad \parentlabel_{\auxb, u_\exp} = \parentcolumns_{[[\auxb]], \mathbf{u}_\total}[p_\exp][l_c - 1]}$
$\line{36}{}{\quad\quad\quad\quad \parentlabels_{[\auxb]}\dot\push(\parentlabel_{\auxb, u_\exp})}$

$\line{37}{}{\quad\quad \parentlabelsbits_{[[\auxb + \constb, \lebytes]]}: {\CircuitBitOrConst^{[256]}}^{[d_\drg\ \text{or}\ d_\total]} = [\ ]}$
$\line{38}{}{\quad\quad \for \parentlabel_\auxb \in \parentlabels_{[\auxb]}:}$
$\line{39}{}{\quad\quad\quad \parentlabelbits_{[\auxb, \Le]}: \CircuitBit^{[255]} \deq}$
$\quad\quad\quad\quad\quad\quad \lebitsgadget(\cs, \parentlabel_\auxb, 255)$
$\line{40}{}{\quad\quad\quad \parentlabelbits_{[\auxb + \constb, \lebytes]}: \CircuitBitOrConst^{[256]} =}$
$\quad\quad\quad\quad\quad\quad \lebitstolebytes(\parentlabelbits_{[\auxb, \Le]})$
$\line{41}{}{\quad\quad\quad \parentlabelsbits_{[[\auxb + \constb, \lebytes]]}\dot\push(\parentlabelbits_{[\auxb + \constb, \lebytes]})}$
$\line{42}{}{\quad\quad \parentlabelsbits_{[[\auxb + \constb, \lebytes]]}\dot\repeat(N_\parentlabels)}$

$\line{43}{}{\quad\quad \calculatedlabel_{\auxb, c, l}: \CircuitVal \deq \createlabelgadget(}$
$\quad\quad\quad\quad \cs,$
$\quad\quad\quad\quad \replicaidbits_{[\auxb + \constb, \lebytes]} \thin,$
$\quad\quad\quad\quad \layerbits_{[\auxb, \Le]} \thin,$
$\quad\quad\quad\quad \challengebits_{[\auxb, \Le]},$
$\quad\quad\quad\quad \parentlabelsbits_{[[\auxb + \constb, \lebytes]]} \thin,$
$\quad\quad\quad)$
$\line{44}{}{\quad\quad \calculatedcolumn_{[\auxb], c}\dot\push(\calculatedlabel_{\auxb, c, l})}$

$\line{45}{}{\quad \calculatedcommr_{\auxb, c}: \CircuitVal\ \deq}$
$\quad\quad\quad \octtreerootgadget(\cs,\thin \calculatedtreerleaf_{\auxb, c} \thin, \TreeRProof_c\dot\path)$
$\line{46}{}{\quad \cs\dot\assert(\calculatedcommr_{\auxb, c} = \commr_\auxb)}$

$\line{47}{}{\quad \encodingkey_{\auxb, c}: \CircuitVal = \calculatedcolumn_{[\auxb], c}[N_\layers - 1]}$
$\line{48}{}{\quad \calculatedtreerleaf_{\auxb, c}: \CircuitVal\ \deq}$
$\quad\quad\quad \encodegadget(\cs,\thin \treedleaf_{\auxb, c} \thin, \encodingkey_{\auxb, c})$

$\line{49}{}{\quad \calculatedtreecleaf_{\auxb, c}: \CircuitVal\ \deq}$
$\quad\quad\quad \poseidongadget{11}(\cs, \calculatedcolumn_{[\auxb], c})$
$\line{50}{}{\quad \calculatedcommc_{\auxb, c}: \CircuitVal\ \deq}$
$\quad\quad\quad \octtreerootgadget(\cs,\thin \calculatedtreecleaf_{\auxb, c} \thin, \ColumnProof_c\dot\path)$
$\line{51}{}{\quad \cs\dot\assert(\calculatedcommc_{\auxb, c} = \commc_\auxb)}$

$\line{52}{}{\return \cs}$

Code Comments:

  • Lines 9-10: Computes $\CommCR^\dagger$ within the circuit from the witnessed commitments and asserts that $\CommCR^\dagger$ is equal to the public input $\CommCR$.
  • Lines 15-16: Adds the packed challenge $c$ as a public input, used when calculating each challenge $c$’s column within the circuit.
  • Lines 17-19: Verifies $c$’s $\TreeDProof_c$ by computing $\CommD_c^\dagger$ within the circuit and asserting that it is equal to the public input $\CommD$.
  • Lines 20-26: Allocates the column of labels for each of $c$’s parents $u \in \mathbf{u}_\total$ and checks that each $u$’s $\ColumnProof_u$ is consistent with the previously verified $\CommC^\dagger \mapsto \CommC \thin$.
  • Lines 27-44: Calculates challenge $c$’s label in each Stacked-DRG layer $l$ within the circuit using each parent’s allocated column.
  • Lines 45-46: Verifies that $c$’s $\TreeRProof_c$ is consistent with the previously verified $\CommR^\dagger \mapsto \CommR$.
  • Lines 47-48: Checks that the calculated encoding key $K_c^\dagger$ for $c$ encodes the previously verified sector and replica tree leaves $D_c^\dagger \mapsto D_c$ into $R_c^\dagger \mapsto R_c$.
  • Lines 49-51: Verifies $c$’s $\ColumnProof_c$ against the previously verified $\CommC$.

PoSt Challenges

The function $\getpostchallenge$ is used to derive a Merkle challenge for a Winning or Window PoSt proof.

Implementation: storage_proofs::post::fallback::vanilla::generate_leaf_challenge()

Additional Notation:

$\R_{\postchallenges, \batch \thin \aww}$
A random value used to derive the challenge set for each of a PoSt prover’s partition proofs in their current Winning or Window PoSt proof batch.

$\SectorID$
The ID for the sector $D$ associated with the replica $R$ for which this Merkle challenge is being generated.

$\challengeindex_\batch$
The unique index of a Merkle challenge across all PoSt partition proofs that a PoSt prover is generating. For all partition proofs in the same PoSt batch, every Merkle challenge across all replicas will have a unique $\challengeindex_\batch \thin$.

$\overline{\Function\getpostchallenge(\qquad\qquad}$
$\quad \R_{\postchallenges, \batch \thin \aww}: \Fq,$
$\quad \SectorID: \u{64},$
$\quad \challengeindex_\batch: \u{64},$
$\underline{) \rightarrow \NodeIndex \qquad\qquad\qquad\qquad\qquad\quad}$
$\line{1}{\bi}{\preimage: \Byte^{[48]} =}$
$\quad\quad \leencode(\R_{\postchallenges, \batch \thin \aww}) \as \Byte^{[32]}$
$\quad\quad \|\ \leencode(\SectorID) \as \Byte^{[8]}$
$\quad\quad \|\ \leencode(\challengeindex_\batch) \as \Byte^{[8]}$

$\line{2}{\bi}{\digest: \Byte^{[32]} = \Sha{256}(\preimage)}$
$\line{3}{\bi}{\digestint: \u{64} = \ledecode(\digest[\ldotdot 8])}$
$\line{4}{\bi}{\return \digestint \MOD N_\nodes}$

Code Comments:

  • Line 4: modding by $N_\nodes$ takes the 64-bit $\digestint$ to a 32-bit node index $\NodeIndex$.
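
A minimal, non-normative Python sketch of this derivation (hypothetical function name, standard-library SHA-256 only) is:

```python
import hashlib

N_NODES = 1 << 30  # number of nodes per sector (see Protocol Constants)

def get_post_challenge_sketch(randomness: bytes, sector_id: int, challenge_index_batch: int) -> int:
    """Sketch of get_post_challenge: derive one Merkle challenge node index."""
    assert len(randomness) == 32
    preimage = (
        randomness                                     # le_encode(R) as 32 bytes
        + sector_id.to_bytes(8, "little")              # le_encode(SectorID) as 8 bytes
        + challenge_index_batch.to_bytes(8, "little")  # le_encode(challenge_index_batch) as 8 bytes
    )
    digest = hashlib.sha256(preimage).digest()
    digest_int = int.from_bytes(digest[:8], "little")  # low 8 bytes as a u64
    return digest_int % N_NODES                        # node index in [N_nodes]
```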

Vanilla PoSt

Proving

Implementation:

Additional Notation:

$\nreplicas_k$
The number of distinct replicas that the prover has for this PoSt partition proof.

$\replicaindex_k$
$\replicaindex_\batch$
The index of a challenged replica $R$ in a partition $k$’s partition proof and the index of the challenged replica across all partition proofs that a prover is generating for a batch.

$\challengeindex_R$
$\challengeindex_\batch$
The index of a Merkle challenge in a challenged replica $R$ and the index of the Merkle challenge across all partition proofs that a prover is generating for a batch.

$\TreeR_R, \CommC_R, \CommCR_R, \TreeRProofs_R$
The subscript $_R$ denotes each of these values as being for the replica $R$ which is distinct within the prover’s PoSt batch.

$\ell_\pad$
The number of non-distinct $\PostReplicaProof{\bf \sf s}$ that are added as padding to a PoSt prover’s final partition proof in a batch.

$\overline{\Function \createvanillapostproof(\ }$
$\quad k: \mathbb{N},$
$\quad \PostReplicas_{P, k \thin \aww},$
$\quad N_{\postreplicas / k \thin \aww},$
$\quad N_{\postchallenges/R \thin \aww},$
$\quad \R_{\postchallenges \thin \aww}: \Fq,$
$\underline{) \rightarrow \PostPartitionProof_{k \thin \aww} \qquad}$
$\line{1}{\bi}{\PostPartitionProof_{k \thin \aww} = [\ ]}$
$\line{2}{\bi}{\nreplicas_k = \len(\PostReplicas_k)}$

$\line{3}{\bi}{\for \replicaindex_k \in [\nreplicas_k]}$
$\line{4}{\bi}{\quad \TreeR_R, \CommC_R, \SectorID_R \Leftarrow \PostReplicas_k[\replicaindex_k]}$
$\line{5}{\bi}{\quad \replicaindex_\batch: \u{64} = k * N_{\postreplicas / k} + \replicaindex_k}$
$\line{6}{\bi}{\quad \TreeRProofs_R: {\TreeRProof}^{\thin[N_{\postchallenges / R}]} = [\ ]}$
$\line{7}{\bi}{\quad \for \challengeindex_R \in [N_{\postchallenges / R}]:}$
$\line{8}{\bi}{\quad\quad \challengeindex_\batch: \u{64} =}$
$\quad\quad\quad\quad \replicaindex_\batch * N_{\postchallenges / R} + \challengeindex_R$
$\line{9}{\bi}{\quad\quad c = \getpostchallenge(\R_\postchallenges, \SectorID, \challengeindex_\batch)}$
$\line{10}{}{\quad\quad \TreeRProof_c = \TreeR\dot\createproof(c)}$
$\line{11}{}{\quad\quad \TreeRProofs\dot\push(\TreeRProof_c)}$
$\line{12}{}{\quad \PostPartitionProof\dot\push(\PostReplicaProof \{\thin \TreeRProofs,\thin \CommC\thin \})}$

$\line{13}{}{\ell_\textsf{pad} = N_{\postreplicas / k} - \nreplicas_k}$
$\line{14}{}{\for i \in [\ell_\textsf{pad}]}$
$\line{15}{}{\quad \PostPartitionProof\dot\push(\PostPartitionProof[\nreplicas_k - 1])}$

$\line{16}{}{\return \PostPartitionProof_{k \thin \aww}}$

Code Comments:

  • Lines 13-15: If the prover does not have enough replicas to fill an entire PoSt partition proof, pad the partition proof with copies of the last distinct replica’s $\PostReplicaProof_R \thin$.
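
The batch-global index arithmetic and the padding step can be illustrated with the following non-normative Python sketch; the function and argument names are hypothetical, and the Merkle proof generation itself is elided.

```python
def post_partition_layout_sketch(k, n_replicas_k, n_replicas_per_partition, n_challenges_per_replica):
    """Sketch of the index arithmetic in create_vanilla_post_proof, including proof padding."""
    partition = []
    for replica_index_k in range(n_replicas_k):
        # Line 5: index of this replica across the whole Winning/Window PoSt batch.
        replica_index_batch = k * n_replicas_per_partition + replica_index_k
        # Line 8: batch-global index of each Merkle challenge for this replica.
        challenge_indices = [
            replica_index_batch * n_challenges_per_replica + challenge_index_r
            for challenge_index_r in range(n_challenges_per_replica)
        ]
        partition.append(challenge_indices)
    # Lines 13-15: pad the partition with copies of the last distinct replica's entry.
    n_pad = n_replicas_per_partition - n_replicas_k
    partition.extend([partition[-1]] * n_pad)
    return partition
```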

Verification

Implementation: storage_proofs::post::fallback::vanilla::FallbackPoSt::verify_all_partitions()

Additional Notation:

$k: N_{\postpartitions / \batch, \P \thin \aww}$
The number of partitions in a Winning or Window PoSt batch depends on the length of the PoSt prover $\P$’s replica set.

$\nreplicas_k$
The number of distinct replicas that the prover has for this PoSt partition proof.

$\replicaindex_k$
$\replicaindex_\batch$
The index of a challenged replica $R$ in partition $k$’s partition proof, and the index of the challenged replica across all partition proofs in a PoSt prover’s batch.

$\challengeindex_R$
$\challengeindex_\batch$
The index of a Merkle challenge in a challenged replica $R$ and the index of the Merkle challenge across all partition proofs in a PoSt prover’s batch.

$\overline{\Function \verifyvanillapostproof( \bi}$
$\quad \PostPartitionProof_{k \thin \aww},$
$\quad k: [N_{\postpartitions / \batch, \P \thin \aww}],$
$\quad \PostReplicas_{\V, k \thin \aww},$
$\quad N_{\postreplicas / k \thin \aww},$
$\quad N_{\postchallenges / R \thin \aww},$
$\quad \R_\postchallenges: \Fq,$
$\underline{) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\bi}$
$\line{1}{\bi}{\nreplicas_k = \len(\PostReplicas_{\V, k})}$

$\line{2}{\bi}{\for \replicaindex_k \in [\nreplicas_k]:}$
$\line{3}{\bi}{\quad \replicaindex_\batch = k * N_{\postreplicas / k} + \replicaindex_k}$

$\line{4}{\bi}{\quad \SectorID, \CommCR \Leftarrow \PostReplicas_{\V, k}[\replicaindex_k]}$
$\line{5}{\bi}{\quad \CommC^\dagger, \TreeRProofs^\dagger \Leftarrow \PostPartitionProof[\replicaindex_k]}$
$\line{6}{\bi}{\quad \CommR^\dagger = \TreeRProofs^\dagger[0]\dot\root}$

$\line{7}{\bi}{\quad \CommCR^\dagger = \Poseidon{2}([\CommC^\dagger, \CommR^\dagger])}$
$\line{8}{\bi}{\quad \assert(\CommCR^\dagger = \CommCR)}$

$\line{9}{\bi}{\quad \for \challengeindex_R \in [N_{\postchallenges / R}]:}$
$\line{10}{}{\quad\quad \challengeindex_\batch: \u{64} =}$
$\quad\quad\quad\quad \replicaindex_\batch * N_{\postchallenges / R} + \challengeindex_R$
$\line{11}{}{\quad\quad c = \getpostchallenge(\R_\postchallenges, \SectorID, \challengeindex_\batch)}$

$\line{12}{}{\quad\quad \TreeRProof^\dagger = \TreeRProofs^\dagger[\challengeindex_R]}$
$\line{13}{}{\quad\quad \assert(\TreeRProof^\dagger\dot\root = \CommR)}$
$\line{14}{}{\quad\quad \assert(\calculateocttreechallenge(\TreeRProof^\dagger\dot\path) = c)}$
$\line{15}{}{\quad\quad \assert(\octtreeproofisvalid(\TreeRProof^\dagger))}$

Code Comments:

  • Line 13: The dagger is removed from $\CommR^\dagger$ (producing $\CommR$) because $\CommR^\dagger$ was verified to be consistent with the committed-to $\CommCR$ (Line 8).

PoSt Circuit

The function $\createpostcircuit$ is used to instantiate a Winning or Window PoSt circuit.

Additional Notation:

$\PostPartitionProof_{k \thin \aww}$
The partition-$k$ proof in a PoSt prover’s Winning or Window PoSt batch. $\PostPartitionProof_k$ contains any padded $\PostReplicaProof{\bf \sf s}$.

$\TreeR_R, \CommC_R, \CommCR_R$
Each $\PostReplica_R \in \PostReplicas_{\P \thin \aww}$ represents a unique replica $R$ in the batch denoted by the subscript $_R \thin$.

$\TreeRProofs_R$
Each $\TreeRProofs$ is for a distinct replica $R$, denoted by the subscript $_R \thin$, in a PoSt batch.

$\overline{\Function \createpostcircuit( \quad\qquad}$
$\quad \PostPartitionProof_{k \thin \aww},$
$\quad \PostReplicas_{\P, k \thin \aww},$
$\quad N_{\postreplicas / k \thin \aww},$
$\underline{) \rightarrow \RCS \qquad\qquad\qquad\qquad\qquad\qquad\bi}$
$\line{1}{\bi}{\cs = \RCS\cc\new()}$
$\line{2}{\bi}{\nreplicas_k = \len(\PostReplicas_{\P, k})}$

$\line{3}{\bi}{\for \replicaindex_k \in [\nreplicas_k]:}$
$\line{4}{\bi}{\quad \TreeR_R, \CommC_R, \CommCR_R \Leftarrow \PostReplicas_{\P, k}[\replicaindex_k]}$
$\line{5}{\bi}{\quad \TreeRProofs_R \Leftarrow \PostPartitionProof_k[\replicaindex_k]}$

$\line{6}{\bi}{\quad \commcr_\pubb: \CircuitVal \deq \cs\dot\publicinput(\CommCR)}$
$\line{7}{\bi}{\quad \commc_\auxb: \CircuitVal \deq \cs\dot\privateinput(\CommC)}$
$\line{8}{\bi}{\quad \commr_\auxb: \CircuitVal \deq \cs\dot\privateinput(\CommR)}$
$\line{9}{\bi}{\quad \calculatedcommcr_\auxb: \CircuitVal\ \deq}$
$\quad\quad\quad \poseidongadget{2}(\cs, [\commc_\auxb, \thin \commr_\auxb])$
$\line{10}{}{\quad \cs\dot\assert(\calculatedcommcr_\auxb = \commcr_\pubb)}$

$\line{11}{}{\quad \for \TreeRProof_c \in \TreeRProofs:}$
$\line{12}{}{\quad\quad \treerleaf_{\auxb, c}: \CircuitVal \deq \cs\dot\privateinput(\TreeRProof_c\dot\leaf)}$
$\line{13}{}{\quad\quad \calculatedcommr_{\auxb, c}: \CircuitVal\ \deq}$
$\quad\quad\quad\quad \octtreerootgadget(\cs,\thin \treerleaf_{\auxb, c} \thin, \TreeRProof_c\dot\path)$
$\line{14}{}{\quad\quad \cs\dot\assert(\calculatedcommr_{\auxb, c} = \commr_\auxb)}$

$\line{15}{}{\return \cs}$

Gadgets

Hash Functions

We make use of the following hash function gadgets; their implementations, however, are beyond the scope of this document.

$\textsf{sha256\_gadget}(\cs: \RCS,\thin \preimage: \CircuitBitOrConst^{[*]}) \rightarrow \CircuitBit^{[256]}$
$\shagadget{254}{2}(\cs: \RCS,\thin \inputs: \CircuitVal^{[2]}) \rightarrow \CircuitBit^{[254]}$
$\poseidongadget{2}(\cs: \RCS,\thin \inputs: \CircuitVal^{[2]}) \rightarrow \CircuitVal$
$\poseidongadget{8}(\cs: \RCS,\thin \inputs: \CircuitVal^{[8]}) \rightarrow \CircuitVal$
$\poseidongadget{11}(\cs: \RCS,\thin \inputs: \CircuitVal^{[11]}) \rightarrow \CircuitVal$

BinTree Root Gadget

The function $\bintreerootgadget$ calculates and returns a $\BinTree$ Merkle root from an allocated leaf $\leaf_\auxb$ and an unallocated Merkle $\path$. Both the leaf and path are from a Merkle challenge $c$’s proof $\BinTreeProof_c$, where $\path = \BinTreeProof_c\dot\path \thin$.

The gadget adds one public input to the constraint system for the packed Merkle proof path bits $\pathbits_\auxle$, which are the binary representation of challenge $c$’s DRG node-index ($\llcorner c \lrcorner_{2, \Le} \equiv \pathbits_\auxle \thin$).

$\overline{\Function \bintreerootgadget(\qquad\quad}$
$\quad \cs: \RCS,$
$\quad \leaf_\auxb: \CircuitVal,$
$\quad \path: \BinPathElement^{[\BinTreeDepth]},$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad\qquad\quad}$
$\line{1}{\bi}{\curr_\auxb: \CircuitVal = \leaf_\auxb}$
$\line{2}{\bi}{\pathbits_{[\auxb, \Le]}: \CircuitBit^{[\BinTreeDepth]} = [\ ]}$

$\line{3}{\bi}{\for \sibling, \missing \in \path:}$
$\line{4}{\bi}{\quad \missingbit_\auxb: \CircuitBit \deq \cs\dot\privateinput(\missing)}$
$\line{5}{\bi}{\quad \sibling_\auxb: \CircuitVal \deq \cs\dot\privateinput(\sibling)}$
$\line{6}{\bi}{\quad \inputs_{[\auxb]}: \CircuitVal^{[2]} \deq}$
$\quad\quad\quad \insertgadget{2}(\cs, [\sibling_\auxb],\thin \curr_\auxb,\thin \missingbit_\auxb)$
$\line{7}{\bi}{\quad \curr_\auxb: \CircuitVal \deq \shagadget{254}{2}(\cs, \inputs_{[\auxb]})}$
$\line{8}{\bi}{\quad \pathbits_{[\auxb, \Le]}\dot\push(\missingbit_\auxb)}$

$\line{9}{\bi}{\packedchallenge_\pubb: \CircuitVal\ \deq}$
$\quad\quad \packbitsasinputgadget(\cs, \pathbits_{[\auxb, \Le]})$

$\line{10}{}{\return \curr_\auxb}$

Code Comments:

  • Line 9: A public input is added to $\cs$ for the Merkle challenge $c$ corresponding to the Merkle path which was used to calculate the returned root.
  • Line 10: The final value for $\curr_\auxb$ is the Merkle root calculated from $\leaf_\auxb$ and $\path$.

OctTree Root Gadget

The function $\octtreerootgadget$ calculates and returns an $\OctTree$ Merkle root from an allocated leaf $\leaf_\auxb$ and an unallocated Merkle $\path$. Both the leaf and path are from a Merkle challenge $c$’s proof $\OctTreeProof_c$, where $\path = \OctTreeProof_c\dot\path \thin$.

The gadget adds one public input to the constraint system for the packed Merkle proof path bits $\pathbits_\auxle$, which are the binary representation of challenge $c$’s DRG node-index ($\llcorner c \lrcorner_{2, \Le} \equiv \pathbits_\auxle \thin$).

Note that the constant $3 = \log_2(8)$, the number of bits required to represent an index in the 8-element Merkle hash $\inputs$ array, is used at various times in the following algorithm.

$\overline{\Function \octtreerootgadget( \qquad\quad}$
$\quad \cs: \RCS,$
$\quad \leaf_\auxb: \CircuitVal,$
$\quad \path: \OctPathElement^{[\OctTreeDepth]},$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad\qquad\quad}$
$\line{1}{\bi}{\curr_\auxb: \CircuitVal = \leaf_\auxb}$
$\line{2}{\bi}{\pathbits_\auxle: \CircuitBit^{[3 * \OctTreeDepth]} = [\ ]}$

$\line{3}{\bi}{\for \siblings, \missing \in \path:}$
$\line{4}{\bi}{\quad \missingbits_\auxle: \CircuitBit^{[3]} = [\ ]}$
$\line{5}{\bi}{\quad \for i \in [3]:}$
$\line{6}{\bi}{\quad\quad \bit: \Bit = (\missing \gg i) \AND 1}$
$\line{7}{\bi}{\quad\quad \bit_\auxb \deq \cs\dot\privateinput(\bit)}$
$\line{8}{\bi}{\quad\quad \missingbits_\auxle\dot\push(\bit_\auxb)}$

$\line{9}{\bi}{\quad \siblings_{[\auxb]}: \CircuitVal^{[7]} = [\ ]}$
$\line{10}{}{\quad \for \sibling \in \siblings:}$
$\line{11}{}{\quad\quad \sibling_\auxb: \CircuitVal \deq \cs\dot\privateinput(\sibling)}$
$\line{12}{}{\quad\quad \siblings_{[\auxb]}\dot\push(\sibling_\auxb)}$

$\line{13}{}{\quad \inputs_{[\auxb]}: \CircuitVal^{[8]}\thin \deq}$
$\quad\quad\quad \insertgadget{8}(\cs, \siblings_{[\auxb]},\thin \curr_\auxb,\thin \missingbits_\auxle)$
$\line{14}{}{\quad \curr_\auxb: \CircuitVal \deq \poseidongadget{8}(\cs, \inputs_{[\auxb]})}$
$\line{15}{}{\quad \pathbits_\auxle\dot\extend(\missingbits_\auxle)}$

$\line{16}{}{\packedchallenge_\pubb: \CircuitVal\ \deq}$
$\quad\quad \packbitsasinputgadget(\cs, \pathbits_{[\auxb, \Le]})$

$\line{17}{}{\return \curr_\auxb}$

Code Comments:

  • Line 1: This is not a reallocation of $\leaf_\auxb$ within $\cs$, but an in-memory copy.
  • Lines 4-8: Witnesses the 3-bit missing index for each path element. The first iteration $i = 0$ corresponds to the least significant bit in $\missing$.
  • Lines 9-12: Witnesses each path element’s 7 Merkle hash inputs (the excluded 8-th Merkle hash input is the calculated hash input $\curr_\auxb$ for this tree depth).
  • Line 13: Creates the Merkle hash inputs array by inserting $\curr$ into $\siblings$ at index $\missing$.
  • Line 14: Hashes the 8 Merkle hash inputs.
  • Line 16: Adds the challenge $c$ as a public input.
  • Line 17: Returns the calculated root.
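
Outside the circuit, the same root computation is a straightforward walk up the tree. The sketch below is non-normative: `poseidon8_placeholder` merely stands in for the real arity-8 Poseidon hash (whose parameters are out of scope here), and the function names are hypothetical.

```python
import hashlib

def poseidon8_placeholder(inputs):
    """Placeholder for the arity-8 Poseidon hash used by OctTrees (not the real function)."""
    data = b"".join(x.to_bytes(32, "little") for x in inputs)
    return int.from_bytes(hashlib.sha256(data).digest(), "little") & ((1 << 254) - 1)

def octtree_root_sketch(leaf, path):
    """Vanilla analogue of octtree_root_gadget: each path element is (7 siblings, missing index)."""
    curr = leaf
    for siblings, missing in path:
        inputs = list(siblings)
        inputs.insert(missing, curr)  # insert_8: place the running hash at the missing index
        curr = poseidon8_placeholder(inputs)
    return curr
```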

Encoding Gadget

The function $\encodegadget$ runs the $\encode$ function within a circuit. It is used to encode $\unencoded_{\auxb, v}$ (node $v$’s sector data $D_v$) into $\encoded_{\auxb, v}$ (the replica node $R_v$) given an allocated encoding key $\key_{\auxb, v}$ ($K_v$).

Implementation: storage_proofs::core::gadgets::encode::encode()

$\overline{\Function \encodegadget(\qquad}$
$\quad \cs: \RCS,$
$\quad \unencoded_{\auxb, v}: \CircuitVal,$
$\quad \key_{\auxb, v}: \CircuitVal,$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{R_v: \Fq = \unencoded_{\auxb, v}\dot\value \oplus \key_{\auxb, v}\dot\value}$
$\line{2}{\bi}{\encoded_{\auxb, v}: \CircuitVal \deq \cs\dot\privateinput(R_v)}$
$\line{3}{\bi}{\lc_A: \LinearCombination \equiv \unencoded_{\auxb, v} + \key_{\auxb, v}}$
$\line{4}{\bi}{\lc_B: \LinearCombination \equiv \cs\dot\one_\pubb}$
$\line{5}{\bi}{\lc_C: \LinearCombination \equiv \encoded_{\auxb, v}}$
$\line{6}{\bi}{\cs\dot\assert(\lc_A * \lc_B = \lc_C)}$
$\line{7}{\bi}{\return \encoded_{\auxb, v}}$
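
Outside the circuit the encoding is plain field addition. A minimal sketch over the integers modulo $q$ (the modulus given in the notation section) follows; the function names are hypothetical.

```python
# BLS12-381 scalar field modulus q (see "General Notation").
Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def encode_sketch(unencoded: int, key: int) -> int:
    """encode: R_v = D_v + K_v in F_q."""
    return (unencoded + key) % Q

def decode_sketch(encoded: int, key: int) -> int:
    """The inverse used by the verifier's key check: D_v = R_v - K_v in F_q."""
    return (encoded - key) % Q
```

The subtraction in `decode_sketch` mirrors the vanilla verifier’s check that ${R_c}^\dagger \ominus {K_c}^\dagger = D_c$.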

Labeling Gadget

The function $\createlabelgadget$ is used to label a node $\node$ in the Stacked-DRG layer $\layerindex$ given the node’s expanded parent labels $\parentlabels$.

Implementation: storage_proofs::porep::stacked::circuit::create_label::create_label_circuit()

Additional Notation:

$\replicaid_{[\auxb + \constb, \lebytes]}$
The allocated bits (and constant zero bit(s)) representing a $\ReplicaID$.

$\layerindex_{[\auxb, \Le]}$
The allocated bits representing a layer $l \in [N_\layers]$ as an unsigned 32-bit integer.

$\node_{[\auxb, \Le]}$
A node index $v \in [N_\nodes]$ allocated as 64 bits.

$\parentlabels_{[[\auxb + \constb, \lebytes]]}$
An array containing $N_\parentlabels$ allocated bit arrays, where each bit array is the label of one of $\node$’s parents.

$\label$
The calculated label for $\node$.

$\overline{\Function \createlabelgadget( \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ }$
$\quad \cs: \RCS,$
$\quad \replicaid_{[\auxb + \constb, \lebytes]}: \CircuitBitOrConst^{[256]} \thin,$
$\quad \layerindex_{[\auxb, \Le]}: \CircuitBit^{[32]} \thin,$
$\quad \node_{[\auxb, \Le]}: \CircuitBit^{[64]} \thin,$
$\quad \parentlabels_{[[\auxb + \constb, \lebytes]]}: {\CircuitBitOrConst^{[256]}}^{[N_\parentlabels]} \thin,$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{\layerindex_{[\auxb, \be]}: \CircuitBit^{[32]} = \reverse(\layerindex_{[\auxb, \Le]})}$
$\line{2}{\bi}{\nodeindex_{[\auxb, \be]}: \CircuitBit^{[64]} = \reverse(\node_{[\auxb, \Le]})}$
$\line{3}{\bi}{\preimage_{[\auxb + \constb]}: \CircuitBitOrConst^{[9984]} =}$
$\quad\quad \replicaid_{[\auxb + \constb, \lebytes]}$
$\quad\quad \|\ \layerindex_{[\auxb, \be]}$
$\quad\quad \|\ \nodeindex_{[\auxb, \be]}$
$\quad\quad \|\ 0^{[160]}$
$\quad\quad \big\|_{\parentlabel \hspace{1pt} \in \hspace{1pt} \parentlabels} \thin \parentlabel_{[\auxb + \constb, \lebytes]} \vphantom{{{|^|}^|}^x}$

$\line{4}{\bi}{\digestbits_{[\auxb, \lebytes]}: \CircuitBit^{[256]} \deq \textsf{sha256\_gadget}(\cs, \preimage_{[\auxb + \constb]})}$
$\line{5}{\bi}{\digestbits_{[\auxb, \Le]}: \CircuitBit^{[256]} = \lebytestolebits(\digestbits_{[\auxb, \lebytes]})}$
$\line{6}{\bi}{\digestbits_{[\auxb, \Le], \safe}: \CircuitBit^{[254]} = \digestbits_{[\auxb, \Le]}[0 \thin\ldotdot\thin 254]}$
$\line{7}{\bi}{\label = \digestbits_{[\auxb, \Le], \safe} \thin\as\thin \Fqsafe}$
$\line{8}{\bi}{\label_\auxb: \CircuitVal \deq \cs\dot\privateinput(\label)}$

$\line{9}{\bi}{\lc: \LinearCombination \equiv \sum_{i \in [254]}{2^i * \digestbits_{[\auxb, \Le], \safe}[i]}}$
$\line{10}{}{\cs\dot\assert(\lc = \label_\auxb)}$

$\line{11}{}{\return \label_\auxb}$

Code Comments:

  • Line 3: The constant $9984 = (2 + N_\parentlabels) * \ell_\block^\bit = (2 + 37) * 256 \thin$. The constant $160 = \ell_\block^\bit - \len(\layerindex) - \len(\nodeindex) = 256 - 32 - 64 \thin$.
  • Lines 4-5: The constant $256 = \ell_\block^\bit \thin$.
  • Lines 5-6: These are not reallocations.
  • Lines 6-7: The labeling function is $\Sha{254}$ not $\Sha{256}$.
  • Lines 6,9: The constant $254 = \ell_{\Fq, \safe}^\bit \thin$.

Little-Endian Bits Gadget

The function $\lebitsgadget$ receives a value $\value$ allocated within a constraint system $\cs$ and reallocates it as its $n$-bit little-endian binary representation.

Note that the number of bits returned must be at least the number of bits required to represent $\value$: $0 < \lceil \log_2(\value\dot\int) \rceil \leq n \thin$.

Implementation: bellman::gadgets::num::AllocatedNum::to_bits_le()

$\overline{\Function \lebitsgadget(}$
$\quad \cs: \RCS,$
$\quad \value_{\langle \auxb | \pubb \rangle}: \CircuitVal,$
$\quad n: \mathbb{Z}^+,$
$\underline{) \rightarrow \CircuitBit^{[n]} \qquad\quad}$
$\line{1}{\bi}{\assert(n \geq \lceil \log_2(\value\dot\int) \rceil)}$

$\line{2}{\bi}{\bits_\Le: \Bit^{[n]} = \llcorner \value_{\langle \auxb | \pubb \rangle}\dot\int \lrcorner_{2, \Le}}$
$\line{3}{\bi}{\bits_{[\auxb, \Le]}: \CircuitBit^{[n]} = [\ ]}$
$\line{4}{\bi}{\for \bit \in \bits_\Le:}$
$\line{5}{\bi}{\quad \bit_\auxb: \CircuitBit \overset{\diamond}{=} \cs\dot\privateinput(\bit)}$
$\line{6}{\bi}{\quad \bits_{[\auxb, \Le]}\dot\push(\bit_\auxb)}$

$\line{7}{\bi}{\lc: \LinearCombination \equiv \sum_{i \in [n]}{2^i * \bits_{[\auxb, \Le]}[i]}}$
$\line{8}{\bi}{\cs\dot\assert(\value_{\langle \auxb | \pubb \rangle} = \lc)}$

$\line{9}{\bi}{\return \bits_{[\auxb, \Le]}}$

Code Comments:

  • Line 2: This will pad $n - \lceil \log_2(\value\dot\int) \rceil$ zero bits onto the most significant end of $\llcorner \int \lrcorner_{2, \Le} \thin$.
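
The decomposition and the repacking constraint have a direct non-circuit analogue; the Python sketch below (hypothetical name) mirrors this behaviour, including the zero-padding noted above.

```python
def le_bits_sketch(value: int, n: int):
    """Non-circuit analogue of le_bits_gadget: n-bit little-endian decomposition of value."""
    assert value >= 0 and n >= value.bit_length()
    bits = [(value >> i) & 1 for i in range(n)]  # least-significant bit first, zero-padded to n bits
    # The repacking constraint: sum(2^i * bits[i]) must equal the original value.
    assert sum(bit << i for i, bit in enumerate(bits)) == value
    return bits
```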

Pack Bits as Input Gadget

The function $\packbitsasinputgadget$ receives an array of $n$ allocated little-endian bits $\bits_{[\auxb, \Le]}$, where $0 < n \leq \ell_\Fqsafe^\bit \thin$, and creates the field element $\packed$ whose little-endian binary representation is that of $\bits$. The gadget adds one public input $\packed_\pubb$ to the constraint system for the created field element.

$\overline{\Function \packbitsasinputgadget(}$
$\quad \cs: \RCS,$
$\quad \bits_{[\auxb, \Le]}: \CircuitBit^{[n]},$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad\qquad\quad}$
$\line{1}{\bi}{\assert(0 < n \leq \ell_\Fqsafe^\bit)}$
$\line{2}{\bi}{\packed: \Fq = \bits_{[\auxb, \Le]} \as \Fq}$
$\line{3}{\bi}{\packed_\pubb \overset{\diamond}{=} \cs\dot\publicinput(\packed)}$
$\line{4}{\bi}{\lc: \LinearCombination \equiv \sum_{i \in [n]}{2^i * \bits_{[\auxb, \Le]}[i]}}$
$\line{5}{\bi}{\cs\dot\assert(\lc = \packed_\pubb)}$
$\line{6}{\bi}{\return \packed_\pubb}$

Pick Gadget

The $\pickgadget$ is used to choose one of two allocated values, $\x$ and $\y$, based upon the value of a binary condition $\bit$.

If $\bit$ is set, the gadget reallocates and returns $\x$; otherwise it reallocates and returns $\y$.

The $\pickgadget$, when given two allocated values $\x, \y \in \Fq$ and an allocated boolean constrained value $\bit \in \Bit$, outputs the allocated value $\pick \in \{ \x, \y \}$ and adds the $\RCS$ quadratic constraint:

$\bi (\y - \x) * (\bit) = (\y - \pick)$

This table shows that, for $\bit \in \Bit$ and $\x, \y \in \Fq$, the constraint is satisfied for the outputted values of $\pick$.

$\bit$ $\pick$ $(\y - \x) * (\bit) = (\y - \pick)$
$1$ $\x$ $(\y-\x) * (1) = (\y-\x)$
$0$ $\y$ $(\y-\x) * (0) = (\y-\y)$
$\overline{\Function \pickgadget( \qquad\quad\bi}$
$\quad \cs: \RCS,$
$\quad \bit_\aap: \CircuitBit,$
$\quad \x_\aap: \CircuitVal,$
$\quad \y_\aap: \CircuitVal,$
$\underline{) \rightarrow \CircuitVal \qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{\pick_\auxb: \CircuitVal \deq \if \bit_\aap\dot\int = 1:}$
$\quad\quad \cs\dot\privateinput(\x_\aap)$
$\quad\else:$
$\quad\quad \cs\dot\privateinput(\y_\aap)$

$\line{2}{\bi}{\lc_A: \LinearCombination \equiv \y_\aap - \x_\aap}$
$\line{3}{\bi}{\lc_B: \LinearCombination \equiv \bit_\aap}$
$\line{4}{\bi}{\lc_C: \LinearCombination \equiv \y_\aap - \pick_\auxb}$
$\line{5}{\bi}{\cs\dot\assert(\lc_A * \lc_B = \lc_C)}$

$\line{6}{\bi}{\return \pick_\auxb}$
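
The table above can also be checked mechanically. The following non-normative snippet (hypothetical names) evaluates the constraint over the scalar field for both values of $\bit$.

```python
Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001  # BLS12-381 scalar modulus

def pick_constraint_holds(bit: int, x: int, y: int, pick: int) -> bool:
    """Evaluates the R1CS constraint (y - x) * bit = (y - pick) over F_q."""
    return ((y - x) * bit) % Q == (y - pick) % Q

# bit = 1 selects x, bit = 0 selects y; both assignments satisfy the constraint.
assert pick_constraint_holds(1, x=5, y=11, pick=5)
assert pick_constraint_holds(0, x=5, y=11, pick=11)
```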

Insert-2 Gadget

The $\insertgadget{2}$ inserts $\value$ into an array $\arr$ at index $\index$ and returns the inserted array of reallocated elements.

The gadget receives an array containing one allocated element $\arr[0]$ and a second allocated value $\value$, and returns a two-element array containing reallocations of both values, where the reallocated $\value$ is placed at the index given by the $\index$ argument.

$\overline{\Function \insertgadget{2}(\qquad\qquad\bi}$
$\quad \cs: \RCS,$
$\quad \arr_\aap: \CircuitVal^{[1]},$
$\quad \value_\aap: \CircuitVal,$
$\quad \index_\auxb: \CircuitBitOrConst,$
$\underline{) \rightarrow \CircuitVal^{[2]} \qquad\qquad\qquad\qquad\qquad}$
$\line{1}{\bi}{\el_{\auxb, 0}: \CircuitVal \deq \pickgadget(\cs,\thin \index_\auxb,\thin \arr_\aap[0] \thin,\thin \value_\aap \thin)}$
$\line{2}{\bi}{\el_{\auxb, 1}: \CircuitVal \deq \pickgadget(\cs, \index_\auxb,\thin \value_\aap \thin,\thin \arr_\aap[0] \thin)}$
$\line{3}{\bi}{\return [\el_{\auxb, 0}, \el_{\auxb, 1}]}$

Insert-8 Gadget

The function $\insertgadget{8}$ inserts a value $\value$ into an array of 7 elements $\arr$ at the index in the resulting 8-element array given by $\indexbits$. The values returned in the 8-element array are reallocations of $\arr$ and $\value$.

Implementation: storage_proofs::core::gadgets::insertion::insert_8()

Note that the length of the $\indexbits$ argument is $3 = \log_2(8)$ bits, which is the number of bits required to represent an index in an array of 8 elements.

Additional Notation:

$\arr'$
The inserted array containing 8 reallocated values, the elements of the uninserted array $\arr$ and the insertion value $\value$.

$\nor_{\auxb \thin (b_0, b_1)}$
Set to true if neither $\indexbits[0]$ nor $\indexbits[1]$ is $1$.

$\and_{\auxb \thin (b_0, b_1)}$
Set to true if both $\indexbits[0]$ and $\indexbits[1]$ are $1$.

$\pick_{\auxb, i(b_0)}$
$\pick_{\auxb, i(b_0, b_1)}$
$\pick_{\auxb, i(b_0, b_1, b_2)}$

The pick for the $i^{th}$ element of the inserted array based upon the value of the first bit (least-significant), first and second bits, and the first, second and third bits respectively.

$b_i \equiv \indexbits_{[\aap, \Le]}[i]$
$\pick_{i(b_0)} \equiv \pick_{\auxb, i(b_0)}$
$\nor_{(b_0, b_1)} \equiv \nor_{\auxb \thin (b_0, b_1)}$
$\and_{(b_0, b_1)} \equiv \and_{\auxb \thin (b_0, b_1)}$
$\arr[i] \equiv \arr_{[\aap]}[i]$
$\arr'[i] \equiv \arr'_{[\auxb]}[i]$

For ease of notation the subscripts $_\auxb$ and $_\aap$ are left off everywhere except in the function signature and when allocating a value within the circuit.

$\overline{\Function \insertgadget{8}( \qquad\qquad\qquad\qquad\qquad\quad}$
$\quad \cs: \RCS,$
$\quad \arr_{[\aap]}: \CircuitVal^{[7]},$
$\quad \value_\aap: \CircuitVal,$
$\quad \indexbits_{[\aap, \Le]}: \CircuitBitOrConst^{[3]},$
$\underline{) \rightarrow \CircuitVal^{[8]} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\bi}$
$\line{1}{\bi}{\nor_{\auxb \thin (b_0, b_1)}: \CircuitBit \deq \norgadget(\cs,\thin b_0 \thin,\thin b_1)}$
$\line{2}{\bi}{\and_{\auxb \thin (b_0, b_1)}: \CircuitBit \deq \andgadget(\cs,\thin b_0 \thin,\thin b_1)}$

$\line{3}{\bi}{\arr'_{[\auxb]}: \CircuitVal^{[8]} = [\ ]}$

$\line{4}{\bi}{\pick_{\auxb, 0(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs,\thin \nor_{(b_0, b_1)},\thin \value,\thin \arr[0])}$
$\line{5}{\bi}{\pick_{\auxb, 0(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs,\thin b_2,\thin \arr[0],\thin \pick_{0(b_0, b_1)})}$
$\line{6}{\bi}{\arr'[0] = \pick_{0(b_0, b_1, b_2)}}$

$\line{7}{\bi}{\pick_{\auxb, 1(b_0)}: \CircuitVal \deq \pickgadget(\cs,\thin b_0,\thin \value,\thin \arr[0])}$
$\line{8}{\bi}{\pick_{\auxb, 1(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs,\thin b_1,\thin \arr[1],\thin \pick_{1(b_0)})}$
$\line{9}{\bi}{\pick_{\auxb, 1(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs,\thin b_2,\thin \arr[1],\thin \pick_{1(b_0, b_1)})}$
$\line{10}{}{\arr'[1] = \pick_{1(b_0, b_1, b_2)}}$

$\line{11}{}{\pick_{\auxb, 2(b_0)}: \CircuitVal \deq \pickgadget(\cs,\thin b_0,\thin \arr[2],\thin \value)}$
$\line{12}{}{\pick_{\auxb, 2(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs,\thin b_1,\thin \pick_{2(b_0)},\thin \arr[1])}$
$\line{13}{}{\pick_{\auxb, 2(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs,\thin b_2,\thin \arr[2],\thin \pick_{2(b_0, b_1)})}$
$\line{14}{}{\arr'[2] = \pick_{2(b_0, b_1, b_2)}}$

$\line{15}{}{\pick_{\auxb, 3(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs, \thin \and_{(b_0, b_1)}, \thin \value, \thin \arr[2])}$
$\line{16}{}{\pick_{\auxb, 3(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs, \thin b_2, \thin \arr[3], \thin \pick_{3(b_0, b_1)})}$
$\line{17}{}{\arr'[3] = \pick_{3(b_0, b_1, b_2)}}$

$\line{18}{}{\pick_{\auxb, 4(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs, \thin \nor_{(b_0, b_1)}, \thin \value, \thin \arr[4])}$
$\line{19}{}{\pick_{\auxb, 4(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs, \thin b_2, \thin \pick_{4(b_0, b_1)}, \thin \arr[3])}$
$\line{20}{}{\arr'[4] = \pick_{4(b_0, b_1, b_2)}}$

$\line{21}{}{\pick_{\auxb, 5(b_0)}: \CircuitVal \deq \pickgadget(\cs, \thin b_0, \thin \value, \thin \arr[4])}$
$\line{22}{}{\pick_{\auxb, 5(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs, \thin b_1, \thin \arr[5], \thin \pick_{5(b_0)})}$
$\line{23}{}{\pick_{\auxb, 5(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs, \thin b_2, \thin \pick_{5(b_0, b_1)}, \thin \arr[4])}$
$\line{24}{}{\arr'[5] = \pick_{5(b_0, b_1, b_2)}}$

$\line{25}{}{\pick_{\auxb, 6(b_0)}: \CircuitVal \deq \pickgadget(\cs, \thin b_0, \thin \arr[6], \thin \value)}$
$\line{26}{}{\pick_{\auxb, 6(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs, \thin b_1, \thin \pick_{6(b_0)}, \thin \arr[5])}$
$\line{27}{}{\pick_{\auxb, 6(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs, \thin b_2, \thin \pick_{6(b_0, b_1)}, \thin \arr[5])}$
$\line{28}{}{\arr'[6] = \pick_{6(b_0, b_1, b_2)}}$

$\line{29}{}{\pick_{\auxb, 7(b_0, b_1)}: \CircuitVal \deq \pickgadget(\cs, \thin \and_{(b_0, b_1)}, \thin \value, \thin \arr[6])}$
$\line{30}{}{\pick_{\auxb, 7(b_0, b_1, b_2)}: \CircuitVal \deq \pickgadget(\cs, \thin b_2, \thin \pick_{7(b_0, b_1)}, \thin \arr[6])}$
$\line{31}{}{\arr'[7] = \pick_{7(b_0, b_1, b_2)}}$

$\line{32}{}{\return \arr'}$
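
The pick cascade above implements, inside the circuit, the plain array insertion sketched below (hypothetical name); the reference semantics are useful when checking the per-position case analysis. Here $\index = b_0 + 2 b_1 + 4 b_2$, with $b_0 = \indexbits[0]$ the least-significant bit.

```python
def insert_8_reference(arr7: list, value, index: int) -> list:
    """Plain semantics of insert_8: return the 8-element array with value placed at index."""
    assert len(arr7) == 7 and 0 <= index < 8
    out = list(arr7)
    out.insert(index, value)  # elements before index keep their position; later ones shift right
    return out
```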

AND Gadget

The function $\andgadget$ returns an allocated bit $1$ if both allocated bit arguments $\x$ and $\y$ are $1$ and returns the allocated bit $0$ otherwise.

Implementation: bellman::gadgets::boolean::AllocatedBit::and()

The $\RCS$ quadratic constraint that is added by the $\andgadget$, when applied to two boolean constrained values $\x, \y \in \Bit$ and outputting a third boolean constrained value $\and \in \Bit$, is:

$\bi (\x) * (\y) = (\and)$

This table shows the satisfiability of the constraint for all values of $\x, \y \in \Bit$ and corresponding outputted values of $\and \in \Bit$.

$\x$ $\y$ $\and$ $(\x) * (\y) = (\and)$
$0$ $0$ $0$ $(0) * (0) = (0)$
$1$ $0$ $0$ $(1) * (0) = (0)$
$0$ $1$ $0$ $(0) * (1) = (0)$
$1$ $1$ $1$ $(1) * (1) = (1)$
$\overline{\Function \andgadget(\qquad}$
$\quad \cs: \RCS,$
$\quad \x_\aap: \CircuitBit,$
$\quad \y_\aap: \CircuitBit,$
$\underline{) \rightarrow \CircuitBit \qquad\qquad\bi}$
$\line{1}{\bi}{\and: \Bit = \x_\aap\dot\int \thin\AND\thin \y_\aap\dot\int}$
$\line{2}{\bi}{\and_\auxb: \CircuitBit \deq \cs\dot\privateinput(\and)}$
$\line{3}{\bi}{\cs\dot\assert(\x_\aap * \y_\aap = \and_\auxb)}$
$\line{4}{\bi}{\return \and_\auxb}$

NOR Gadget

The function $\norgadget$ returns an allocated bit $1$ if both allocated bit arguments $\x$ and $\y$ are $0$ and returns the allocated bit $0$ otherwise.

Implementation: bellman::gadgets::boolean::AllocatedBit::nor()

The $\RCS$ quadratic constraint that is added by $\norgadget$, when applied to two boolean constrained values $\x, \y \in \Bit$ and outputting a third boolean constrained value $\nor \in \Bit$, is:

$\bi (1 - \x) * (1 - \y) = (\nor)$

The following table shows the satisfiability of the constraint for all values of $\x, \y \in \Bit$ and corresponding outputted values for $\nor \in \Bit$.

$\x$ $\y$ $\nor$ $(1 - \x) * (1 - \y) = (\nor)$
$0$ $0$ $1$ $(1) * (1) = (1)$
$1$ $0$ $0$ $(0) * (1) = (0)$
$0$ $1$ $0$ $(1) * (0) = (0)$
$1$ $1$ $0$ $(0) * (0) = (0)$
$\overline{\Function \norgadget(\qquad}$
$\quad \cs: \RCS,$
$\quad \x_\aap: \CircuitBit,$
$\quad \y_\aap: \CircuitBit,$
$\underline{) \rightarrow \CircuitBit \qquad\qquad\bi}$
$\line{1}{\bi}{\nor: \Bit = \neg (\x_\aap\dot\int \OR \y_\aap\dot\int)}$
$\line{2}{\bi}{\nor_\auxb: \CircuitBit \deq \cs\dot\privateinput(\nor)}$
$\line{3}{\bi}{\lc_A: \LinearCombination \equiv 1 - \x_\aap}$
$\line{4}{\bi}{\lc_B: \LinearCombination \equiv 1 - \y_\aap}$
$\line{5}{\bi}{\lc_C: \LinearCombination \equiv \nor_\auxb}$
$\line{6}{\bi}{\cs\dot\assert(\lc_A * \lc_B = \lc_C)}$
$\line{7}{\bi}{\return \nor_\auxb}$
$$ \gdef\createporepbatch{\textsf{create\_porep\_batch}} \gdef\GrothProof{\textsf{Groth16Proof}} \gdef\Groth{\textsf{Groth16}} \gdef\GrothEvaluationKey{\textsf{Groth16EvaluationKey}} \gdef\GrothVerificationKey{\textsf{Groth16VerificationKey}} \gdef\creategrothproof{\textsf{create\_groth16\_proof}} \gdef\ParentLabels{\textsf{ParentLabels}} \gdef\or#1#2{\langle #1 | #2 \rangle} \gdef\porepreplicas{\textsf{porep\_replicas}} \gdef\postreplicas{\textsf{post\_replicas}} \gdef\winningpartitions{\textsf{winning\_partitions}} \gdef\windowpartitions{\textsf{window\_partitions}} \gdef\lebinrep#1{{\llcorner #1 \lrcorner_{2, \textsf{le}}}} \gdef\bebinrep#1{{\llcorner #1 \lrcorner_{2, \textsf{be}}}} \gdef\lebytesbinrep#1{{\llcorner #1 \lrcorner_{2, \textsf{le-bytes}}}} \gdef\feistelrounds{\textsf{feistel\_rounds}} \gdef\int{\textsf{int}} \gdef\lebytes{\textsf{le-bytes}} \gdef\lebytestolebits{\textsf{le\_bytes\_to\_le\_bits}} \gdef\lebitstolebytes{\textsf{le\_bits\_to\_le\_bytes}} \gdef\letooctet{\textsf{le\_to\_octet}} \gdef\byte{\textsf{byte}} \gdef\postpartitions{\textsf{post\_partitions}} \gdef\PostReplica{\textsf{PostReplica}} \gdef\PostReplicas{\textsf{PostReplicas}} \gdef\PostPartitionProof{\textsf{PostPartitionProof}} \gdef\PostReplicaProof{\textsf{PostReplicaProof}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\pad{\textsf{pad}} \gdef\octettole{\textsf{octet\_to\_le}} \gdef\packed{\textsf{packed}} \gdef\val{\textsf{val}} \gdef\bits{\textsf{bits}} \gdef\partitions{\textsf{partitions}} \gdef\Batch{\textsf{Batch}} \gdef\batch{\textsf{batch}} \gdef\postbatch{\textsf{post\_batch}} \gdef\postchallenges{\textsf{post\_challenges}} \gdef\Nonce{\textsf{Nonce}} \gdef\createvanillaporepproof{\textsf{create\_vanilla\_porep\_proof}} \gdef\PorepVersion{\textsf{PorepVersion}} \gdef\bedecode{\textsf{be\_decode}} \gdef\OR{\mathbin{|}} \gdef\indexbits{\textsf{index\_bits}} \gdef\nor{\textsf{nor}} \gdef\and{\textsf{and}} \gdef\norgadget{\textsf{nor\_gadget}} \gdef\andgadget{\textsf{and\_gadget}} \gdef\el{\textsf{el}} \gdef\arr{\textsf{arr}} \gdef\pickgadget{\textsf{pick\_gadget}} \gdef\pick{\textsf{pick}} \gdef\int{\textsf{int}} \gdef\x{\textsf{x}} \gdef\y{\textsf{y}} \gdef\aap{{\langle \auxb | \pubb \rangle}} \gdef\aapc{{\langle \auxb | \pubb | \constb \rangle}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\parentlabelsbits{\textsf{parent\_labels\_bits}} \gdef\label{\textsf{label}} \gdef\layerbits{\textsf{layer\_bits}} \gdef\labelbits{\textsf{label\_bits}} \gdef\digestbits{\textsf{digest\_bits}} \gdef\node{\textsf{node}} \gdef\layerindex{\textsf{layer\_index}} \gdef\be{\textsf{be}} \gdef\octet{\textsf{octet}} \gdef\reverse{\textsf{reverse}} \gdef\LSBit{\textsf{LSBit}} \gdef\MSBit{\textsf{MSBit}} \gdef\LSByte{\textsf{LSByte}} \gdef\MSByte{\textsf{MSByte}} \gdef\PorepPartitionProof{\textsf{PorepPartitionProof}} \gdef\PostPartitionProof{\textsf{PostPartitionProof}} \gdef\octetbinrep#1{{\llcorner #1 \lrcorner_{\lower{2pt}{2, \textsf{octet}}}}} \gdef\fieldelement{\textsf{field\_element}} \gdef\Fqsafe{{\mathbb{F}_{q, \safe}}} \gdef\elem{\textsf{elem}} \gdef\challenge{\textsf{challenge}} \gdef\challengeindex{\textsf{challenge\_index}} \gdef\uniquechallengeindex{\textsf{unique\_challenge\_index}} \gdef\replicaindex{\textsf{replica\_index}} \gdef\uniquereplicaindex{\textsf{unique\_replica\_index}} \gdef\nreplicas{\textsf{n\_replicas}} \gdef\unique{\textsf{unique}} \gdef\R{\mathcal{R}} \gdef\getpostchallenge{\textsf{get\_post\_challenge}} \gdef\verifyvanillapostproof{\textsf{verify\_vanilla\_post\_proof}} 
\gdef\BinPathElement{\textsf{BinPathElement}} \gdef\BinTreeDepth{\textsf{BinTreeDepth}} \gdef\BinTree{\textsf{BinTree}} \gdef\BinTreeProof{\textsf{BinTreeProof}} \gdef\bintreeproofisvalid{\textsf{bintree\_proof\_is\_valid}} \gdef\Bit{{\{0, 1\}}} \gdef\Byte{\mathbb{B}} \gdef\calculatebintreechallenge{\textsf{calculate\_bintree\_challenge}} \gdef\calculateocttreechallenge{\textsf{calculate\_octtree\_challenge}} \gdef\depth{\textsf{depth}} \gdef\dot{\textsf{.}} \gdef\for{\textsf{for }} \gdef\Function{\textbf{Function: }} \gdef\Fq{{\mathbb{F}_q}} \gdef\leaf{\textsf{leaf}} \gdef\line#1#2#3{\scriptsize{\textsf{#1.}#2}\ \normalsize{#3}} \gdef\missing{\textsf{missing}} \gdef\NodeIndex{\textsf{NodeIndex}} \gdef\nodes{\textsf{nodes}} \gdef\OctPathElement{\textsf{OctPathElement}} \gdef\OctTree{\textsf{OctTree}} \gdef\OctTreeDepth{\textsf{OctTreeDepth}} \gdef\OctTreeProof{\textsf{OctTreeProof}} \gdef\octtreeproofisvalid{\textsf{octtree\_proof\_is\_valid}} \gdef\path{\textsf{path}} \gdef\pathelem{\textsf{path\_elem}} \gdef\return{\textsf{return }} \gdef\root{\textsf{root}} \gdef\Safe{{\Byte^{[32]}_\textsf{safe}}} \gdef\sibling{\textsf{sibling}} \gdef\siblings{\textsf{siblings}} \gdef\struct{\textsf{struct }} \gdef\Teq{\underset{{\small \mathbb{T}}}{=}} \gdef\Tequiv{\underset{{\small \mathbb{T}}}{\equiv}} \gdef\thin{{\thinspace}} \gdef\AND{\mathbin{\&}} \gdef\MOD{\mathbin{\%}} \gdef\createproof{{\textsf{create\_proof}}} \gdef\layer{\textsf{layer}} \gdef\nodeindex{\textsf{node\_index}} \gdef\childindex{\textsf{child\_index}} \gdef\push{\textsf{push}} \gdef\index{\textsf{index}} \gdef\leaves{\textsf{leaves}} \gdef\len{\textsf{len}} \gdef\ColumnProof{\textsf{ColumnProof}} \gdef\concat{\ \|\ } \gdef\inputs{\textsf{inputs}} \gdef\Poseidon{\textsf{Poseidon}} \gdef\bi{\ \ } \gdef\Bool{{\{\textsf{True}, \textsf{False}\}}} \gdef\curr{\textsf{curr}} \gdef\if{\textsf{if }} \gdef\else{\textsf{else}} \gdef\proof{\textsf{proof}} \gdef\Sha#1{\textsf{Sha#1}} \gdef\ldotdot{{\ldotp\ldotp}} \gdef\as{\textsf{ as }} \gdef\bintreerootgadget{\textsf{bintree\_root\_gadget}} \gdef\octtreerootgadget{\textsf{octtree\_root\_gadget}} \gdef\cs{\textsf{cs}} \gdef\RCS{\textsf{R1CS}} \gdef\pathbits{\textsf{path\_bits}} \gdef\missingbit{\textsf{missing\_bit}} \gdef\missingbits{\textsf{missing\_bits}} \gdef\pubb{\textbf{pub}} \gdef\privb{\textbf{priv}} \gdef\auxb{\textbf{aux}} \gdef\constb{\textbf{const}} \gdef\CircuitVal{\textsf{CircuitVal}} \gdef\CircuitBit{{\textsf{CircuitVal}_\Bit}} \gdef\Le{\textsf{le}} \gdef\privateinput{\textsf{private\_input}} \gdef\publicinput{\textsf{public\_input}} \gdef\deq{\mathbin{\overset{\diamond}{=}}} \gdef\alloc{\textsf{alloc}} \gdef\insertgadget#1{\textsf{insert\_#1\_gadget}} \gdef\block{\textsf{block}} \gdef\shagadget#1#2{\textsf{sha#1\_#2\_gadget}} \gdef\poseidongadget#1{\textsf{poseidon\_#1\_gadget}} \gdef\refeq{\mathbin{\overset{{\small \&}}=}} \gdef\ptreq{\mathbin{\overset{{\small \&}}=}} \gdef\bit{\textsf{bit}} \gdef\extend{\textsf{extend}} \gdef\auxle{{[\textbf{aux}, \textsf{le}]}} \gdef\SpecificNotation{{\underline{\text{Specific Notation}}}} \gdef\repeat{\textsf{repeat}} \gdef\preimage{\textsf{preimage}} \gdef\digest{\textsf{digest}} \gdef\digestbytes{\textsf{digest\_bytes}} \gdef\digestint{\textsf{digest\_int}} \gdef\leencode{\textsf{le\_encode}} \gdef\ledecode{\textsf{le\_decode}} \gdef\ReplicaID{\textsf{ReplicaID}} \gdef\replicaid{\textsf{replica\_id}} \gdef\replicaidbits{\textsf{replica\_id\_bits}} \gdef\replicaidblock{\textsf{replica\_id\_block}} \gdef\cc{\textsf{::}} 
\gdef\new{\textsf{new}} \gdef\lebitsgadget{\textsf{le\_bits\_gadget}} \gdef\CircuitBitOrConst{{\textsf{CircuitValOrConst}_\Bit}} \gdef\createporepcircuit{\textsf{create\_porep\_circuit}} \gdef\CommD{\textsf{CommD}} \gdef\CommC{\textsf{CommC}} \gdef\CommR{\textsf{CommR}} \gdef\CommCR{\textsf{CommCR}} \gdef\commd{\textsf{comm\_d}} \gdef\commc{\textsf{comm\_c}} \gdef\commr{\textsf{comm\_r}} \gdef\commcr{\textsf{comm\_cr}} \gdef\assert{\textsf{assert}} \gdef\asserteq{\textsf{assert\_eq}} \gdef\TreeDProof{\textsf{TreeDProof}} \gdef\TreeRProof{\textsf{TreeRProof}} \gdef\TreeR{\textsf{TreeR}} \gdef\ParentColumnProofs{\textsf{ParentColumnProofs}} \gdef\challengebits{\textsf{challenge\_bits}} \gdef\packedchallenge{\textsf{packed\_challenge}} \gdef\PartitionProof{\textsf{PartitionProof}} \gdef\u#1{\textsf{u#1}} \gdef\packbitsasinputgadget{\textsf{pack\_bits\_as\_input\_gadget}} \gdef\treedleaf{\textsf{tree\_d\_leaf}} \gdef\treerleaf{\textsf{tree\_r\_leaf}} \gdef\calculatedtreedroot{\textsf{calculated\_tree\_d\_root}} \gdef\calculatedtreerleaf{\textsf{calculated\_tree\_r\_leaf}} \gdef\calculatedcommd{\textsf{calculated\_comm\_d}} \gdef\calculatedcommc{\textsf{calculated\_comm\_c}} \gdef\calculatedcommr{\textsf{calculated\_comm\_r}} \gdef\calculatedcommcr{\textsf{calculated\_comm\_cr}} \gdef\layers{\textsf{layers}} \gdef\total{\textsf{total}} \gdef\column{\textsf{column}} \gdef\parentcolumns{\textsf{parent\_columns}} \gdef\columns{\textsf{columns}} \gdef\parentlabel{\textsf{parent\_label}} \gdef\label{\textsf{label}} \gdef\calculatedtreecleaf{\textsf{calculated\_tree\_c\_leaf}} \gdef\calculatedcolumn{\textsf{calculated\_column}} \gdef\parentlabels{\textsf{parent\_labels}} \gdef\drg{\textsf{drg}} \gdef\exp{\textsf{exp}} \gdef\parentlabelbits{\textsf{parent\_label\_bits}} \gdef\parentlabelblock{\textsf{parent\_label\_block}} \gdef\Bits{\textsf{ Bits}} \gdef\safe{\textsf{safe}} \gdef\calculatedlabel{\textsf{calculated\_label}} \gdef\createlabelgadget{\textsf{create\_label\_gadget}} \gdef\encodingkey{\textsf{encoding\_key}} \gdef\encodegadget{\textsf{encode\_gadget}} \gdef\TreeC{\textsf{TreeC}} \gdef\value{\textsf{value}} \gdef\encoded{\textsf{encoded}} \gdef\unencoded{\textsf{unencoded}} \gdef\key{\textsf{key}} \gdef\lc{\textsf{lc}} \gdef\LC{\textsf{LC}} \gdef\LinearCombination{\textsf{LinearCombination}} \gdef\one{\textsf{one}} \gdef\constraint{\textsf{constraint}} \gdef\proofs{\textsf{proofs}} \gdef\merkleproofs{\textsf{merkle\_proofs}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\challenges{\textsf{challenges}} \gdef\pub{\textsf{pub}} \gdef\priv{\textsf{priv}} \gdef\last{\textsf{last}} \gdef\TreeRProofs{\textsf{TreeRProofs}} \gdef\post{\textsf{post}} \gdef\SectorID{\textsf{SectorID}} \gdef\winning{\textsf{winning}} \gdef\window{\textsf{window}} \gdef\Replicas{\textsf{Replicas}} \gdef\P{\mathcal{P}} \gdef\V{\mathcal{V}} \gdef\ww{{\textsf{winning}|\textsf{window}}} \gdef\replicasperk{{\textsf{replicas}/k}} \gdef\replicas{\textsf{replicas}} \gdef\Replica{\textsf{Replica}} \gdef\createvanillapostproof{\textsf{create\_vanilla\_post\_proof}} \gdef\createpostcircuit{\textsf{create\_post\_circuit}} \gdef\ReplicaProof{\textsf{ReplicaProof}} \gdef\aww{{\langle \ww \rangle}} \gdef\partitionproof{\textsf{partition\_proof}} \gdef\replicas{\textsf{replicas}} \gdef\getdrgparents{\textsf{get\_drg\_parents}} \gdef\getexpparents{\textsf{get\_exp\_parents}} \gdef\DrgSeed{\textsf{DrgSeed}} \gdef\DrgSeedPrefix{\textsf{DrgSeedPrefix}} \gdef\FeistelKeysBytes{\textsf{FeistelKeysBytes}} \gdef\porep{\textsf{porep}} 
\gdef\rng{\textsf{rng}} \gdef\ChaCha#1{\textsf{ChaCha#1}} \gdef\cc{\textsf{::}} \gdef\fromseed{\textsf{from\_seed}} \gdef\buckets{\textsf{buckets}} \gdef\meta{\textsf{meta}} \gdef\dist{\textsf{dist}} \gdef\each{\textsf{each}} \gdef\PorepID{\textsf{PorepID}} \gdef\porepgraphseed{\textsf{porep\_graph\_seed}} \gdef\utf{\textsf{utf8}} \gdef\DrgStringID{\textsf{DrgStringID}} \gdef\FeistelStringID{\textsf{FeistelStringID}} \gdef\graphid{\textsf{graph\_id}} \gdef\createfeistelkeys{\textsf{create\_feistel\_keys}} \gdef\FeistelKeys{\textsf{FeistelKeys}} \gdef\feistelrounds{\textsf{feistel\_rounds}} \gdef\feistel{\textsf{feistel}} \gdef\ExpEdgeIndex{\textsf{ExpEdgeIndex}} \gdef\loop{\textsf{loop}} \gdef\right{\textsf{right}} \gdef\left{\textsf{left}} \gdef\mask{\textsf{mask}} \gdef\RightMask{\textsf{RightMask}} \gdef\LeftMask{\textsf{LeftMask}} \gdef\roundkey{\textsf{round\_key}} \gdef\beencode{\textsf{be\_encode}} \gdef\Blake{\textsf{Blake2b}} \gdef\input{\textsf{input}} \gdef\output{\textsf{output}} \gdef\while{\textsf{while }} \gdef\digestright{\textsf{digest\_right}} \gdef\xor{\mathbin{\oplus_\text{xor}}} \gdef\Edges{\textsf{ Edges}} \gdef\edge{\textsf{edge}} \gdef\expedge{\textsf{exp\_edge}} \gdef\expedges{\textsf{exp\_edges}} \gdef\createlabel{\textsf{create\_label}} \gdef\Label{\textsf{Label}} \gdef\Column{\textsf{Column}} \gdef\Columns{\textsf{Columns}} \gdef\ParentColumns{\textsf{ParentColumns}} % `\tern` should be written as % \gdef\tern#1?#2:#3{#1\ \text{?}\ #2 \ \text{:}\ #3} % but that's not possible due to https://github.com/KaTeX/KaTeX/issues/2288 \gdef\tern#1#2#3{#1\ \text{?}\ #2 \ \text{:}\ #3} \gdef\repeattolength{\textsf{repeat\_to\_length}} \gdef\verifyvanillaporepproof{\textsf{verify\_vanilla\_porep\_proof}} \gdef\poreppartitions{\textsf{porep\_partitions}} \gdef\challengeindex{\textsf{challenge\_index}} \gdef\porepbatch{\textsf{porep\_batch}} \gdef\winningchallenges{\textsf{winning\_challenges}} \gdef\windowchallenges{\textsf{window\_challenges}} \gdef\PorepPartitionProof{\textsf{PorepPartitionProof}} \gdef\TreeD{\textsf{TreeD}} \gdef\TreeCProof{\textsf{TreeCProof}} \gdef\Labels{\textsf{Labels}} \gdef\porepchallenges{\textsf{porep\_challenges}} \gdef\postchallenges{\textsf{post\_challenges}} \gdef\PorepChallengeSeed{\textsf{PorepChallengeSeed}} \gdef\getporepchallenges{\textsf{get\_porep\_challenges}} \gdef\getallparents{\textsf{get\_all\_parents}} \gdef\PorepChallengeProof{\textsf{PorepChallengeProof}} \gdef\challengeproof{\textsf{challenge\_proof}} \gdef\PorepChallenges{\textsf{PorepChallenges}} \gdef\replicate{\textsf{replicate}} \gdef\createreplicaid{\textsf{create\_replica\_id}} \gdef\ProverID{\textsf{ProverID}} \gdef\replicaid{\textsf{replica\_id}} \gdef\generatelabels{\textsf{generate\_labels}} \gdef\labelwithdrgparents{\textsf{label\_with\_drg\_parents}} \gdef\labelwithallparents{\textsf{label\_with\_all\_parents}} \gdef\createtreecfromlabels{\textsf{create\_tree\_c\_from\_labels}} \gdef\ColumnDigest{\textsf{ColumnDigest}} \gdef\encode{\textsf{encode}} \gdef\sector{\textsf{sector}} $$

SDR Notation, Constants, and Types

General Notation

$\mathbb{T}^{[n]}$
An array of $n$ elements of type $\mathbb{T}$.

$V[i]$
The element of array $V$ at index $i$. All indexes in this document (unless otherwise noted) start at $0$.

$[n] \equiv [0, n - 1] = 0, 1, \ldots, n - 1$
The range of integers from $0$ up to, but not including, $n$. Despite the use of square brackets, $n$ is not included in this range.

$\int: [n]$
The type $[n]$ denotes $\int$ as being an integer in the range $[n] \equiv 0, \ldots, n - 1 \thin$.

$\Byte$
The byte type, an integer in $[256]$.

$\Bit^{[n]}$
A bit string of length $n$.

$0^{[n]}$
$1^{[n]}$
Creates a bit-string where each of the $n$ bits is set to $0$ or $1$ respectively.

$\u{32} = \Bit^{[32]}$
$\u{64} = \Bit^{[64]}$
32 and 64-bit unsigned integers equipped with standard arithmetic operations.

$\u{64}_{(17)}$
A 64-bit unsigned integer where only the least significant 17 bits are utilized (each may be $0$ or $1$); the remaining 47 bits are unused and set to $0$.

$\Fq \equiv [q]$
An element of the scalar field that arises from BLS12-381’s subgroup of prime order $q$, where $q$, given in hex and decimal, is:

$\quad q = \text{73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001}_{16}$
$\quad q = 52435875175126190479447740508185965837690552500527637822603658699938581184513_{10}$

$[a, b] = a, a + 1, \ldots, b$
The range of integers from $a$ up to and including $b$ (both endpoints are inclusive). There is an ambiguity in notation between this and the construction of a two element array. The notation $[a, b]$ always refers to a range except in the cases where we pass a two element array as an argument into the hash functions $\Sha{254}_2$ and $\Poseidon_2$.

$V[a..b] = V_a, \ldots, V_{b - 1}$
The slice of elements of array $V$ from index $a$ (inclusive) up to index $b$ (noninclusive).

$V[..n] \equiv V[0 .. n] = V_0, \ldots, V_{n - 1}$
A slice of $V$ starting at the first element (inclusive) and ending at the element at index $n$ (noninclusive).

$V[n..] \equiv V[n..\len(V)] = V_n, \dots, V_{\len(V) - 1}$
A slice of $V$ containing the elements starting at index $n$ (inclusive) up to and including the last element.

$M[r][:]$
The $r^{th}$ row of matrix $M$ as an array.

$M[:][c] = [\thin M[r][c]\ |\ \forall r \in [n_\text{rows}] \thin]$
The $c^{th}$ column of matrix $M$ (having $n_\text{rows}$ rows) as a flattened array.

$\mathbb{T}^{[n]} \concat \mathbb{T}^{[m]} \rightarrow \mathbb{T}^{[n + m]}$
Concatenates two arrays (whose elements are the same type) producing an array of type $\mathbb{T}^{[n + m]} \thin$.

$a \MOD b$
Integer $a$ modulo integer $b$. Maps $a$ into the range $0, \ldots, b - 1 \thin$.

$x \leftarrow\ S$
Samples $x$ uniformly from $S$.

$a \ll n$
Bitwise left shift of $a$ by $n$ bits. The leftmost bit of an unsigned integer is the most significant; for example, $\int: \u{8} = 5_{10} = 00000101_2 \thin$.

$a \gg n$
Bitwise right shift of $a$ by $n$ bits. The rightmost bit of an unsigned integer is the least significant; for example, $(\int \gg 1) \AND 1$ returns the 2nd least significant bit of an integer $\int: \u{32}, \u{64} \thin$.

$a \AND b$
Bitwise AND of $a$ and $b$, where $a$ and $b$ are integers or bit-strings. The rightmost bit of an unsigned integer is the least significant; for example, $\int \AND 1$ returns the least significant bit of an integer $\int: \u{32}, \u{64} \thin$.

$a \OR b$
Bitwise OR. Used in conjunction with bitwise shift to concatenate bit strings, e.g. $(101_2 \ll 3) \OR 10_2 = 10110_2 \thin$.

$a \xor b$
Bitwise XOR.

$a \oplus b$
$a \ominus b$
Field addition and subtraction respectively. $a$ and $b$ are field elements $\Fq$.

$a, b \Leftarrow X$
Destructures an instance of $\struct X\ \{a: \mathbb{A},\ b: \mathbb{B}\}$ into two variables $a$ and $b$, assigning to each variable the value (and type) of $X.a$ and $X.b$ respectively.

$\for \mathbb{T}\ \{\ x ,\thin y \ \} \in \textsf{Iterator}:$
Iterates over $\textsf{Iterator}$ and destructures each element into variables $x$ and $y$ local within the for-loop iteration’s scope. Each element of $\textsf{Iterator}$ is an instance of a structure $\mathbb{T}$.

$\sum_{i \in [n]}{\textsf{expr}_i}$
For each $i \in 0, \ldots, n - 1 \thin$, sums the values output by the $i^{th}$ expression $\textsf{expr}_i\thin$.

$\big\|_{x \hspace{1pt} \in \hspace{1pt} S} \thin x$
The concatenation of all elements $x$ of an iterator $S$.

$V = [x\ \textsf{as}\ \mathbb{T}\ |\ \forall x \in S]$
Array builder notation. The above creates an array $V$ of type $\mathbb{T}^{[\textsf{len}(S)]}$ where each $V[i] = S[i] \as \mathbb{T}$ .

$\llcorner \int \lrcorner_{2, \Le}: \Bit^{[\lceil \log_2(\int) \rceil]}$
$\llcorner \int \lrcorner_{8, \Le}: [8]^{[\lceil \log_8(\int) \rceil]}$
The little-endian binary and octal representations of an unsigned integer $\int$.

$\for \each \in \textsf{Iterator}:$
Iterates over each element of $\textsf{Iterator}$ and ignores each element’s value.

$\big[ \tern{x = 0}{a}{b} \big]$
A ternary expression. Returns $a$ if $x = 0$, otherwise returns $b$.

$\mathbb{T}^{[m]}\dot\extend(\mathbb{T}^{[n]}) \rightarrow \mathbb{T}^{[m + n]}$
Appends each value of $\mathbb{T}^{[n]}$ onto the end of the array $\mathbb{T}^{[m]}$. Equivalent to writing:

$\bi \line{1}{\bi}{x: \mathbb{T}^{[m]} = [x_1, \thin\ldots\thin, x_m]}$
$\bi \line{2}{\bi}{y: \mathbb{T}^{[n]} = [y_1, \thin\ldots\thin, y_n]}$
$\bi \line{3}{\bi}{x: \mathbb{T}^{[m + n]} = x \concat y}$

$\mathbb{T}^{[m]}\dot\repeattolength(n: \mathbb{N} \geq m) \rightarrow \mathbb{T}^{[n]}$
Repeats the values of array $\mathbb{T}^{[m]}$ until its length is $n$. Equivalent to writing:

$\line{1}{\bi}{x: \mathbb{T}^{[m]} = [x_1, \thin\ldots\thin, x_m]}$
$\line{2}{\bi}{\for i \in [n - m]:}$
$\line{3}{\bi}{\quad x\dot\push(x[i \MOD m])}$
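
As a non-normative illustration of the $\extend$ and $\repeattolength$ operations, the following Go sketch implements them for a concrete element type; the function names are chosen for this document and do not appear in the implementation.

package notation

// extend appends every element of ys onto the end of xs.
func extend(xs, ys []uint64) []uint64 {
    return append(xs, ys...)
}

// repeatToLength repeats the elements of xs, cycling from the start,
// until the result has length n (requires n >= len(xs) and len(xs) > 0).
func repeatToLength(xs []uint64, n int) []uint64 {
    out := append([]uint64{}, xs...)
    for i := 0; len(out) < n; i++ {
        out = append(out, xs[i%len(xs)])
    }
    return out
}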

Protocol Constants

Implementation: filecoin_proofs::constants

$\ell_\sector^\byte = 32\ \textsf{GiB} = 32 * 1024 * 1024 * 1024\ \textsf{Bytes}$
The byte length of a sector $D$.

$\ell_\node^\byte = \ell_\Fq^\byte = 32\ \textsf{Bytes}$
The byte length of a node.

$N_\nodes = \ell_\sector^\byte / \ell_\node^\byte = 2^{30}\ \textsf{Nodes}$
The number of nodes in each DRG. The protocol guarantees that the sector byte length is evenly divisible by the node byte length.

$\ell_\Fq^\bit = \lceil \log_2(q) \rceil = \lceil 254.85\ldots \rceil = 255 \Bits$
The number of bits required to represent any field element.

$\ell_\Fqsafe^\bit = \ell_\Fq^\bit - 1 = 254 \Bits$
The maximum integer number of bits that can be safely casted (casted without the possibility of failure) into a field element.

$\ell_\block^\bit = 256 \Bits$
The bit length of a $\Sha{256}$ block.

$d_\drg = 6$
The degree of each DRG. The number of DRG parents generated for each node. The total number of parents generated for nodes in the first Stacked DRG layer.

$d_\meta = d_\drg - 1 = 5$
The degree of the DRG metagraph. The number of DRG parents generated using a metagraph.

$d_\exp = 8$
The degree of each expander. The number of expander parents generated for each node not in the first Stacked-DRG layer.

$d_\total = d_\drg + d_\exp = 14$
The total number of parents generated for all nodes not in the first Stacked-DRG layer.

$N_\expedges = d_\exp * N_\nodes = 2^{33} \Edges$
The number of edges per expander graph.

$\ell_\expedge^\bit = \log_2(N_\expedges) = 33 \Bits$
The number of bits required to represent the index of an expander edge.

$\ell_\mask^\bit = \lceil \ell_\expedge^\bit / 2 \rceil = 17 \Bits$
The number of bits that are masked by a Feistel network bitmask.

$\RightMask: \u{64}_{(17)} = 2^{\ell_\mask^\bit} - 1 \qquad\quad\quad\bi\ = 0000000000000000011111111111111111_2$
$\LeftMask: \u{64}_{(17 \ldotdot 34)} = \RightMask \ll \ell_\mask^\bit = 1111111111111111100000000000000000_2$
The Feistel network’s right-half and left-half bitmasks. Each bitmask contains $\ell_\mask^\bit$ bits set to $1$. Both bitmasks are represented in binary as $34 = 2 * \ell_\mask^\bit$ digits. Note that $\RightMask$ utilizes its lowest 17 bits, while $\LeftMask$ utilizes bits 17 through 33.
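
The two masks follow directly from $\ell_\mask^\bit$; the following Go sketch (illustrative only, the constant names are this document's) computes them:

package feistel

// maskBits is ℓ_mask^bit from the constants above.
const maskBits = 17

// rightMask sets the lowest 17 bits; leftMask sets bits 17 through 33.
const (
    rightMask uint64 = (1 << maskBits) - 1
    leftMask  uint64 = rightMask << maskBits
)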

$N_\feistelrounds = 3$
The number of rounds per Feistel network.

$N_\layers = 11$
The number of DRG layers in the Stacked DRG.

$N_\parentlabels = 37$
The number of parent labels factored into each node label.

$\BinTreeDepth = \log_2(N_\nodes) = 30$
$\OctTreeDepth = \log_8(N_\nodes) = 10$
The depth of a $\BinTree$ and $\OctTree$ respectively. The number of tree layers is the tree’s depth $+ 1 \thin$. The Merkle hash arity of trees are 2 and 8 respectively.

$N_{\poreppartitions / \batch} = 10$
$N_{\postpartitions / \batch, \P, \thin \winning} \leq \len(\PostReplicas_{\P, \batch})$
$N_{\postpartitions / \batch, \P, \thin \window} \leq \len(\PostReplicas_{\P, \batch})$
The number of partition proofs per PoRep, Winning PoSt, and Window PoSt proof batch. The number of PoSt partition proofs in a batch is specific to the size of the PoSt prover $\P$’s replica set $\PostReplicas_{\P, \batch}$ at the time of batch proof generation.

$N_{\porepreplicas / k} = 1$
$N_{\postreplicas / k, \winning} = 1$
$N_{\postreplicas / k, \window} = 2349$
The number of challenged replicas per PoRep, Winning PoSt, and Window PoSt partition proof.

$N_{\postreplicas / k \thin \aww} \in \{ N_{\postreplicas / k \thin, \winning}, N_{\postreplicas / k, \window} \}$
Notational shorthand meaning “either Winning or Window PoSt, determined by context”.

$N_{\porepchallenges / k} \equiv N_{\porepchallenges / R} = 176$
The number of Merkle challenges per PoRep partition proof. PoRep partition proofs are generated using a single replica $R$.

$N_{\postchallenges / R, \thin \winning} = 66$
$N_{\postchallenges / R, \thin \window} = 10$
The number of Merkle challenges per challenged replica $R$ in a PoSt partition proof.

$\GrothEvaluationKey_{\langle \textsf{circ} \rangle} = \text{<set during}\ \textsf{circ} \text{'s trusted setup>}$
$\GrothVerificationKey_{\langle \textsf{circ} \rangle} = \text{<set during}\ \textsf{circ} \text{'s trusted setup>}$
The Groth16 keypair used to generate and verify SNARKs for a circuit definition $\textsf{circ}$ (PoRep, Winning PoSt, Window PoSt each for a given protocol sector size).

$\DrgStringID: {\Byte_\utf}^{[*]} = ``\text{Filecoin\_DRSample}"$
$\FeistelStringID: {\Byte_\utf}^{[*]} = ``\text{Filecoin\_Feistel}"$
The ID strings associated with the DRG and Feistel network.

$\DrgSeedPrefix_\PorepID: \Byte^{[28]} = \Sha{256}(\DrgStringID \concat \PorepID)[\ldotdot 28]$
Part of the RNG seed used to sample DRG parents for every PoRep having the version $\PorepID$.
Implementation: storage_proofs::core::crypto::derive_porep_domain_seed()

$\PorepVersion_\textsf{SDR,32GiB,v1}: \u{64} = 3$
The version ID of the PoRep in use. PoRep versions are parameterized by the triple: (PoRep proof system, sector-size, version number).

$\Nonce_\PorepVersion: \u{64} = 0$
Each $\PorepVersion$ has an associated nonce used to generate the PoRep version’s $\PorepID$. Currently all PoRep versions have a nonce of $0$.

$\FeistelKeysBytes_\PorepID: \Byte^{[32]} = \Sha{256}(\FeistelStringID \concat \PorepID)$
The byte array representing the concatenation of the Feistel network’s $N_\feistelrounds$ 64-bit round keys. All PoReps corresponding to the version $\PorepID$ use the same Feistel keys.
Implementation: storage_proofs::core::crypto::derive_porep_domain_seed()

$\FeistelKeys_\PorepID: \u{64}^{[N_\feistelrounds]}\thin =\thin [$
$\quad \ledecode(\FeistelKeysBytes_\PorepID[\ldotdot 8]) \thin,$
$\quad \ledecode(\FeistelKeysBytes_\PorepID[8 \ldotdot 16]) \thin,$
$\quad \ledecode(\FeistelKeysBytes_\PorepID[16 \ldotdot 24]) \thin,$
$]$

The Feistel round keys used by every PoRep having version $\PorepID$.
Implementation: storage_proofs::porep::stacked::vanilla::graph::StackedGraph::new()

$\DrgSeed_{\PorepID, v}: \Byte^{[32]} = \DrgSeedPrefix_\PorepID \concat \leencode(v) \as \Byte^{[4]}$
The DRG parent sampling RNG’s seed for each node $v \in [N_\nodes]$ for a porep version $\PorepID \thin$.
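
The following non-normative Go sketch illustrates how $\FeistelKeys_\PorepID$ and $\DrgSeed_{\PorepID, v}$ could be derived from $\PorepID$ per the definitions above; the function names are illustrative (the implementation's derivation lives in storage_proofs::core::crypto::derive_porep_domain_seed()).

package porepseed

import (
    "crypto/sha256"
    "encoding/binary"
)

// drgSeed derives the DRG parent-sampling RNG seed for node v:
// Sha256(DrgStringID || PorepID)[..28] || le_encode(v) as Byte^[4].
func drgSeed(porepID [32]byte, v uint32) [32]byte {
    pre := sha256.Sum256(append([]byte("Filecoin_DRSample"), porepID[:]...))
    var seed [32]byte
    copy(seed[:28], pre[:28])
    binary.LittleEndian.PutUint32(seed[28:], v)
    return seed
}

// feistelKeys derives the three 64-bit Feistel round keys by little-endian
// decoding consecutive 8-byte slices of Sha256(FeistelStringID || PorepID).
func feistelKeys(porepID [32]byte) [3]uint64 {
    keyBytes := sha256.Sum256(append([]byte("Filecoin_Feistel"), porepID[:]...))
    var keys [3]uint64
    for i := range keys {
        keys[i] = binary.LittleEndian.Uint64(keyBytes[i*8 : (i+1)*8])
    }
    return keys
}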

Protocol Types and Notation

$\Safe = \Byte^{[32]}[..31] \concat (\Byte^{[32]}[31] \AND 00111111_2)$
An array of 32 bytes that utilizes only the first $\ell_\Fqsafe^\bit = 254$ bits. $\Safe$ types can be safely casted into field elements (casting is guaranteed not to fail). $\Safe$ is least significant byte first.

$\Fqsafe = \Fq \AND 1^{[254]}$
A field element that utilizes only its first $\ell_\Fqsafe^\bit = 254$ bits. $\Fqsafe$’s are created from casting a $\Safe$ into an $\Fq$ (the casting produces an $\Fqsafe \thin$).

$\leencode(\Fq) \rightarrow \Byte^{[32]}$
$\leencode(\Fqsafe) \rightarrow \Safe$
The produced byte array is least significant byte first.

$\NodeIndex: \u{32} \in [N_\nodes]$
The index of a node in a DRG. Node indexes are 32-bit integers.

$v: \NodeIndex$
$u: \NodeIndex$
Represents the node indexes for a child node $v$ and a parent $u$ of $v$.

$\mathbf{u}_\drg: \NodeIndex^{[d_\drg]} = \getdrgparents(v)$
An array of DRG parents for a child node $v: \NodeIndex$. DRG parents are in $v$’s Stacked-DRG layer.

$\mathbf{u}_\exp: \NodeIndex^{[d_\exp]} = \getexpparents(v)$
An array of expander parents for a child node $v: \NodeIndex$. Expander parents are in the Stacked-DRG layer preceding $v$’s.

$\mathbf{u}_\total: \NodeIndex^{[d_\total]} = \getallparents(v) = \mathbf{u}_\drg \concat \mathbf{u}_\exp$
An array containing both the DRG and Expander parents for a child node $v: \NodeIndex$. The first $d_\drg$ elements of $\mathbf{u}_\total$ are $v$’s DRG parents and the last $d_\exp$ elements are $v$’s expander parents.

$p \in [\len(\mathbf{u}_{\langle \drg | \exp | \total \rangle})]$
The index of a parent in a parent node array $\mathbf{u}_{\langle \drg | \exp | \total \rangle}$. $p$ is not the same as a parent node’s index $u: \NodeIndex$.

$l \in [N_\layers]$
The index of a layer in the Stacked DRG. This document indexes layers starting at $0$, not $1$. Within the context of Merkle trees and proofs, $l$ may denote a tree layer $l \in [0, \langle \BinTreeDepth | \OctTreeDepth \rangle] \thin$.

$l_v \in [N_\layers]$
The Stacked-DRG layer that a node $v$ resides in.

$k \in [N_{\poreppartitions / \batch}]$
$k \in [N_{\postpartitions / \batch, \P, \aww}]$
The index of a PoRep or PoSt partition proof within a proof batch.

$c: \NodeIndex$
A Merkle challenge.

$\R$
A random value.

$\P, \V$
A prover and verifier respectively.

$X^\dagger$
The superscript $^\dagger$ denotes an unverified value, for example when verifying a proof.

Merkle Trees

Filecoin utilizes two Merkle tree types, a binary tree type $\BinTree$ and octal tree type $\OctTree$.

$\BinTree{\bf \sf s}$ use the $\Sha{254}_2$ hash function (two inputs per hash function call, tree nodes have type $\Safe$). $\OctTree{\bf \sf s}$ use the hash function $\Poseidon_8$ (eight inputs per hash function call, tree nodes have type $\Fq$).

Implementation:

$\struct \BinTree\ \{$
$\quad \leaves \sim \layer_0: \Safe^{[N_\nodes]},$
$\quad \layer_1: \Safe^{[N_\nodes / 2]} \thin,$
$\quad \layer_2: \Safe^{[N_\nodes / 2^2]} \thin,$
$\quad \dots\ ,$
$\quad \layer_{\BinTreeDepth - 1}: \Safe^{[2]},$
$\quad \root \sim \layer_\BinTreeDepth: \Safe,$
$\}$

A binary Merkle tree with hash function arity-2 (each Merkle hash operates on two input values). The hash function for $\BinTree$’s is $\Sha{254}_2$ ($\Sha{256}$ that operates on two 32-byte inputs where the last two bits of the last byte of their $\Sha{256}$ digest have been zeroed to produce a $\Safe$ digest). The fields $\layer_0, \ldots, \layer_\BinTreeDepth$ are arrays containing each tree layer’s node labels. The fields $\leaves$ and $\root$ are aliases for the fields $\layer_0$ and $\layer_\BinTreeDepth$ respectively.

$\struct \OctTree\ \{$
$\quad \leaves \sim \layer_0: \Fq^{[N_\nodes]},$
$\quad \layer_1: \Fq^{[N_\nodes / 8]} \thin,$
$\quad \layer_2: \Fq^{[N_\nodes / 8^2]} \thin,$
$\quad \dots\ ,$
$\quad \layer_{\OctTreeDepth - 1}: \Fq^{[8]},$
$\quad \root \sim \layer_\OctTreeDepth: \Fq,$
$\}$

An octal Merkle tree with hash function arity-8 (each Merkle hash operates on 8 input values). The hash function for $\OctTree$’s is $\Poseidon_8$. The fields $\layer_0, \ldots, \layer_{10}$ are arrays containing each tree layer’s node labels. The fields $\leaves$ and $\root$ are aliases for the fields $\layer_0$ and $\layer_{10}$ respectively.

Merkle Proofs

Implementation:

$\struct \BinTreeProof\ \{$
$\quad \leaf: \Safe \thin,$
$\quad \root: \Safe \thin,$
$\quad \path: \BinPathElement^{[\BinTreeDepth]} \thin,$
$\}$

A $\BinTree$ Merkle proof generated for a challenge. The notation $\BinTreeProof_c$ denotes a proof generated for a challenge $c$. The field $\path$ contains one element for tree layers $0, \ldots, \BinTreeDepth - 1$ (there is no $\path$ element for the root layer). The path element $\BinTreeProof\dot\path[l]$ for tree layers $l \in 0, \ldots, \BinTreeDepth - 1$ contains one node label in layer $l$ that the Merkle proof verifier will use to calculate the label for a node in layer $l + 1 \thin$. Each path element is for a distinct tree layer.

$\struct \BinPathElement\ \{$
$\quad \sibling: \Safe \thin,$
$\quad \missing: \Bit \thin,$
$\}$

A single element in $\BinTreeProof\dot\path$ associated with a single $\BinTree$ tree layer $l$. Contains the information necessary for a Merkle proof verifier to calculate the label for a node in tree layer $l + 1$. The field $\sibling$ contains the label that the Merkle proof verifier did not calculate for layer $l$. The Merkle verifier applies the hash function $\Sha{254}_2$ to an array of two elements in layer $l$ to produce the label for a node in layer $l + 1$. The order of the elements in each Merkle hash function’s 2-element array is given by the field $\missing$. If $\missing = 0$, then $\sibling$ is at index $1$ in the Merkle hash inputs array. Conversely, if $\missing = 1$ the field $\sibling$ is at index $0$ in the 2-element Merkle hash inputs array.
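
A minimal, non-normative Go sketch of how a verifier consumes the $\missing$ bit when checking a $\BinTreeProof$ is shown below. Type and function names are illustrative, and the $\Sha{254}_2$ compression is stubbed out here (see the Hash Functions section for its definition).

package merkle

// sha254_2 stands in for Sha254_2 (SHA-256 over two 32-byte inputs with the
// two most significant bits of the last digest byte zeroed); its body is
// elided here, see the Hash Functions section.
func sha254_2(a, b [32]byte) [32]byte {
    var out [32]byte
    return out
}

type binPathElement struct {
    Sibling [32]byte
    Missing byte // 0 or 1: index of the verifier-calculated label in the hash inputs
}

// verifyBinTreeProof recomputes the root from the leaf and path and compares
// it against the claimed root.
func verifyBinTreeProof(leaf, root [32]byte, path []binPathElement) bool {
    cur := leaf
    for _, elem := range path {
        if elem.Missing == 0 {
            cur = sha254_2(cur, elem.Sibling) // sibling at index 1
        } else {
            cur = sha254_2(elem.Sibling, cur) // sibling at index 0
        }
    }
    return cur == root
}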

$\struct \OctTreeProof\ \{$
$\quad \leaf: \Fq \thin,$
$\quad \root: \Fq \thin,$
$\quad \path: \OctPathElement^{[\OctTreeDepth]} \thin,$
$\}$

An $\OctTree$ Merkle proof generated for a challenge. The notation $\OctTreeProof_c$ denotes a proof generated for a challenge $c$. The field $\path$ contains one element for tree layers $0, \ldots, \OctTreeDepth - 1$ (there is no $\path$ element for the root layer). The path element $\OctTreeProof\dot\path[l]$ for tree layers $l \in 0, \ldots, \OctTreeDepth - 1$ contains the seven node labels in layer $l$ that the Merkle proof verifier will use to calculate the label for a node in layer $l + 1 \thin$. Each path element is for a distinct tree layer. If $l = 0$ the verifier inserts $\OctTreeProof\dot\leaf$ into the first path element’s Merkle hash inputs array at index $\OctTreeProof\dot\path[0]\dot\missing \thin$.

$\struct \OctPathElement\ \{$
$\quad \siblings: \Fq^{[7]} \thin,$
$\quad \missing: [8] \thin,$
$\}$

A single element in $\OctTreeProof\dot\path$ associated with a single $\OctTree$ tree layer $l$. Contains the information necessary for a Merkle proof verifier to calculate the label for a node in tree layer $l + 1$. The field $\siblings$ contains the seven labels in layer $l$ that the Merkle proof verifier did not calculate. The Merkle verifier applies the hash function $\Poseidon_8$ to an array of eight elements in layer $l$ to produce the label for a node in layer $l + 1$. The order of the elements in each Merkle hash function’s 8-element array is given by the field $\missing$: it is the index in the 8-element Merkle hash inputs array at which the verifier inserts the node label it has calculated for this path element’s layer; the remaining seven positions are filled with $\siblings$. Given an $\OctPathElement = \OctTreeProof\dot\path[l]$ in tree layer $l$, the node label to be inserted into the hash inputs array at index $\missing$ was calculated using the previous path element $\OctTreeProof\dot\path[l - 1]$ (or $\OctTreeProof\dot\leaf$ if $l = 0$).

Graphs

$\ExpEdgeIndex: \u{64}_{(33)} \equiv [N_\expedges]$
The index of an edge in an expander graph. Note that $\ell_\expedge^\bit = 33 \thin$.

PoRep
$\PorepID: \Byte^{[32]} =$
$\quad \leencode(\PorepVersion) \as \Byte^{[8]}$
$\quad \|\ \leencode(\Nonce_\PorepVersion) \as \Byte^{[8]}$
$\quad \|\ 0^{[16]}$

A unique 32-byte ID assigned to each PoRep version. Each PoRep version is defined by a distinct triple of parameters: (PoRep proof system, sector-size, version number). All PoRep’s having the same PoRep version triple have the same $\PorepID$. The notation $0^{[16]}$ denotes 16 zero bytes (not bits).
Implementation: filecoin_proofs_api::registry::RegisteredSealProof::porep_id()
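
As a non-normative illustration, the following Go sketch builds the 32-byte $\PorepID$ from a version and nonce per the definition above; the function name is this document's, not the implementation's.

package porep

import "encoding/binary"

// porepID concatenates le_encode(version), le_encode(nonce), and 16 zero bytes.
func porepID(version, nonce uint64) [32]byte {
    var id [32]byte
    binary.LittleEndian.PutUint64(id[0:8], version) // le_encode(PorepVersion) as Byte^[8]
    binary.LittleEndian.PutUint64(id[8:16], nonce)  // le_encode(Nonce) as Byte^[8]
    // id[16:32] stays as 16 zero bytes
    return id
}

For the SDR/32GiB/v1 parameters above ($\PorepVersion = 3$, $\Nonce_\PorepVersion = 0$), this yields the byte 0x03 followed by 31 zero bytes.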

$\SectorID: \u{64}$
A unique 64-bit ID assigned to each distinct sector $D$.

$\ReplicaID: \Fqsafe$
A unique 32-byte ID assigned to each distinct replica $R$.
Implementation: storage_proofs::porep::stacked::vanilla::params::generate_replica_id()

$\R_\replicaid: \Byte^{[32]}$
A random value used to generate a replica $R$’s $\ReplicaID$.

$\TreeD: \BinTree$
$\TreeC: \OctTree$
$\TreeR: \OctTree$
Merkle trees built over a sector $D$, the column digests of a $\Labels$ matrix, and a replica $R$ respectively. $\TreeD$ uses the Merkle hash function $\Sha{254}_2$ while $\TreeC$ and $\TreeR$ use the Merkle hash function $\Poseidon_8$. The leaves of $\TreeD$ are the array of $D_i: \Safe \in D$ for a sector $D$. The leaves of $\TreeC$ are the array of $\ColumnDigest$’s for a replica’s labeling $\Labels$. The leaves of $\TreeR$ are the array of $R_i: \Fq \in R$ for a replica $R$.

$\CommD: \Safe = \TreeD\dot\root$
$\CommC: \Fq = \TreeC\dot\root$
$\CommR: \Fq = \TreeR\dot\root$
$\CommCR: \Fq = \Poseidon_2([\CommC, \CommR])$
The Merkle roots of a $\TreeD$, $\TreeC$, and $\TreeR$ as well as the commitment $\CommCR$, a commitment to both a replica’s Stacked-DRG labeling $\Labels$ and the replica $R$.

$\Label_{v, l}: \Fqsafe$
The label of a node $v$ in a Stacked-DRG layer $l \thin$. The label of a node in a Stacked-DRG layer is specific to a replica $R$’s labeling $\Labels \thin$. Labels are $\Fqsafe$ because the labeling function is $\Sha{254}$ which returns 254-bit/$\Safe$ digests which are converted into 254-bit/safe field elements.

$\Labels: {\Label^{[N_\nodes]}}^{[N_\layers]}$
An $N_\layers$ x $N_\nodes$ matrix containing the label of every node in a replica’s Stacked-DRG labeling $\Labels \thin$.

$\Column_v: \Label^{[N_\layers]} = \Labels[:][v]$
The label of a node $v$ in every Stacked-DRG layer (first layer’s label is at index $0$ in the column). A node column is specific to the labeling of a single replica.

$\ColumnDigest_v: \Fq = \Poseidon_{11}(\Column_v)$
The digest of a node $v$’s column in a replica’s Stacked-DRG. $\ColumnDigest{\bf \sf s}$ are used as the leaves for $\TreeC$. The set of $\ColumnDigest_v{\bf \sf s}$ for all DRG nodes $v \in [N_\nodes]$ is specific to a single replica’s labeling.

$\struct \ColumnProof\ \{$
$\quad \leaf: \Fq,$
$\quad \root: \Fq,$
$\quad \path: \OctPathElement^{[\OctTreeDepth]},$
$\quad \column: \Column,$
$\}$

A $\ColumnProof_c$ is an $\OctTreeProof_c$ adjoined with an additional field $\column_c$, the Merkle challenge $c$’s label in each Stacked-DRG layer. A valid $\ColumnProof$ has $\ColumnProof\dot\leaf: \ColumnDigest = \Poseidon_{11}(\ColumnProof\dot\column) \thin$.

$\ParentLabels_{\mathbf{u}_\drg}: \Label^{[d_\drg]} = [\Label_{u_\drg, l_0} \mid \forall u_\drg \in \mathbf{u}_\drg]$
$\ParentLabels_{\mathbf{u}_\total}: \Label^{[d_\total]} = [\Label_{u_\drg, l} \mid \forall u_\drg \in \mathbf{u}_\drg] \concat [\Label_{u_\exp, l - 1} \mid \forall u_\exp \in \mathbf{u}_\exp]$
The arrays of a node $v$’s (where $v$ is in Stacked-DRG layer $l_v$) DRG and total parent labels respectively (DRG parent labels are in layer $l_v$, expander parent labels are in layer $l_v - 1$). $\ParentLabels_{\mathbf{u}_\drg}$ is only called while labeling nodes in layer $l_0 = 0 \thin$.

$\ParentLabels_{\mathbf{u}_\drg}^\star: \Label^{[N_\parentlabels]} = \ParentLabels_{\mathbf{u}_\drg}\dot\repeattolength(N_\parentlabels)$
$\ParentLabels_{\mathbf{u}_\total}^\star: \Label^{[N_\parentlabels]} = \ParentLabels_{\mathbf{u}_\total}\dot\repeattolength(N_\parentlabels)$
The superscript $^\star$ denotes that $\ParentLabels$ has been expanded to length $N_\parentlabels \thin$.

$\R_{\porepchallenges, \batch}: \Byte^{[32]}$
Randomness used to generate the challenge set for a batch of PoRep proofs.

$\PorepChallenges_{R, k}: (\NodeIndex \setminus 0)^{[N_{\porepchallenges / k}]}$
The set of PoRep challenges for a replica $R$’s partition-$k$ partition proof. The first node index $0$ is not challenged in PoRep’s (the operator $\setminus$ is set subtraction).

$\struct \PorepChallengeProof_c\ \{$
$\quad \TreeDProof_c,$
$\quad \ColumnProof_c,$
$\quad \TreeRProof_c,$
$\quad \ParentColumnProofs_{\mathbf{u}_\total}: {\ColumnProof_u}^{[d_\total]},$
$\}$

The proof generated for each Merkle challenge $c$ in a PoRep partition proof. The field $\ParentColumnProofs_{\mathbf{u}_\total}$ stores the $\ColumnProof_u$ for each parent $u \in \mathbf{u}_\total$ of the challenge $c$.

$\PorepPartitionProof_{R, k}: \PorepChallengeProof^{[N_{\porepchallenges / k}]}$
A single PoRep partition proof for partition $k$ in a replica $R$’s PoRep proof batch.

PoSt

$\R_{\postchallenges, \batch \thin \aww}: \Fq$
Randomness used to generate the challenge set for a batch of PoSt proofs.

$\struct \PostReplica_\P\ \{$
$\quad \TreeR,$
$\quad \CommC,$
$\quad \CommCR,$
$\quad \SectorID,$
$\}$

A replica $R$ that a PoSt prover has access to. All fields are associated with $R$ at the time of PoSt proof batch generation.

$\PostReplicas_{\P, \batch \thin \aww}: {\PostReplica}^{[*]}$
The set of all distinct replicas that a PoSt prover $\P$ has knowledge of at time of a Winning or Window batch proof generation. PoSt provers in the Filecoin network may have different sized replica sets, thus $\PostReplicas$ is arbitrarily sized $*$.

$\PostReplicas_{\P, k \thin \aww}: {\PostReplica_{\P, \batch}}^{[0 < \ell \leq N_{\postreplicas/k \thin \aww}]}$
$\PostReplicas_{\P, k \thin \aww} =$
$\quad \PostReplicas_{\P, \batch \thin \aww}[k * N_{\postreplicas / k \thin \aww} \ldotdot (k + 1) * N_{\postreplicas / k \thin \aww}]$

The $k^{th}$ distinct slice of a PoSt prover $\P$’s total replica set $\PostReplicas_{\P, \batch}$ used to generate the prover’s partition-$k$ Winning or Window PoSt proof in a batch. This set contains all replicas that are challenged in PoSt partition $k$. $\PostReplicas_{\P, k}$ does not contain padded replica proofs. The length of a PoSt prover’s total replica set may not be divisible by the number of challenged replicas $N_{\postreplicas / k \thin \aww}$, thus the length of $\PostReplicas_{\P, k}$ is in the range $[1, N_{\postreplicas / k \thin \aww}] \thin$.

$\PostPartitionProof_{k \thin \aww}: {\PostReplicaProof_{R \thin \aww}}^{[N_{\postreplicas/k \thin \aww}]}$
A PoSt partition proof generated by a PoSt prover for their $k^{th}$ partition proof in their current batch of Winning or Window PoSt proofs. Each $\PostReplicaProof$ in the partition proof is associated with a unique challenged replica $R$ (unique across the entire batch). A $\PostPartitionProof$ may contain padded replica proofs to ensure that the partition proof has length $N_{\postreplicas / k \thin \aww} \thin$.

$\struct \PostReplicaProof_{R \thin \aww}\ \{$
$\quad \TreeRProofs: \TreeRProof^{[N_{\challenges/R \thin \aww}]} \thin,$
$\quad \CommC,$
$\}$

The proof for a single replica $R$ challenged in a PoSt partition proof. All fields are associated with $R$.

$\struct \PostReplica_\V\ \{$
$\quad \SectorID,$
$\quad \CommCR,$
$\}$

The public information known to a PoSt verifier $\V$ for each challenged replica $R$ (distinct up to the PoSt batch) of the PoSt prover. $\SectorID$ and $\CommCR$ are associated with the replica $R$.

$\PostReplicas_{\V, k \thin \aww}: {\PostReplica_\V}^{[0 < \ell \leq N_{\postreplicas / k \thin \aww}]}$
The public information known to PoSt verifier $\V$ for each distinct replica $R$ in a PoSt prover’s partition-$k$ replica set $\Replicas_{\P, k} \thin$. The length of the partition’s replica set is the number of unique replicas used to generate $\PartitionProof_{P, k \thin \aww}$, which may be less than $N_{\postreplicas / k \thin \aww} \thin$.

Type Conversions

$x \as \mathbb{T}$
Converts a value $x$ of type $\mathbb{X}$ into type $\mathbb{T}$. The $\as$ keyword takes precedence over arithmetic and bitwise operations, for example $a * b \as \mathbb{T}$ casts $b$ to type $\mathbb{T}$ before multiplying by $a$.

$\Safe \as \Fq$
$\Safe \as \Fqsafe$
Converts a 32-byte array where only the lowest 254 bits are utilized into a prime field element. A $\Safe$ byte array is guaranteed to represent a valid field element.

$\Fq \as \Byte^{[32]}$
Converts a prime field element into a 32-byte array. This conversion is safe as all field elements can be represented using $\ell_\Fq^\bit = 255$ bits.

$\beencode(\mathbb{T}) \rightarrow \Byte^{[n]}$
$\leencode(\mathbb{T}) \rightarrow \Byte^{[n]}$
Big and little-endian encoding (big and little endian with respect to the byte order of the produced byte array) of a value having type $\mathbb{T}$ into an array of $n$ bytes. 32-bit integers $\u{32}$’s are encoded into 4 bytes and 64-bit integers $\u{64}$’s are encoded into 8 bytes.

$\bedecode(\Byte^{[n]}) \rightarrow \mathbb{T}$
$\ledecode(\Byte^{[n]}) \rightarrow \mathbb{T}$
Decodes big and little-endian byte arrays into a value of type $\mathbb{T}$.
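
For illustration, these encodings correspond to the standard byte-order primitives; a non-normative Go sketch for the $\u{64}$ case:

package codec

import "encoding/binary"

// leEncodeU64 and beDecodeU64 illustrate le_encode / be_decode for u64 values
// (8-byte arrays), using the standard library byte-order helpers.
func leEncodeU64(x uint64) []byte {
    out := make([]byte, 8)
    binary.LittleEndian.PutUint64(out, x)
    return out
}

func beDecodeU64(b []byte) uint64 {
    return binary.BigEndian.Uint64(b) // b must hold at least 8 bytes
}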

$\Bit_\Le^{[0 < n \leq \ell_\Fq^\bit]} \quad \as \Fq$
$\Bit_\Le^{[0 < n \leq \ell_\Fqsafe^\bit]} \thin \as \Fqsafe$
Casts a little-endian array of $n$ bits into a field element ($\ell_\Fq^\bit = 255$) and safe field element ($\ell_\Fqsafe^\bit = 254$) respectively. Equivalent to writing:

$\bi \line{1}{\bi}{\bits_\Le: \Bit^{[n]} = \text{<some bits>}}$
$\bi \line{2}{\bi}{\fieldelement: \Fq = 0}$
$\bi \line{3}{\bi}{\for i \in [n]:}$
$\bi \line{4}{\bi}{\quad \fieldelement \mathrel{+}= 2^i * \bits_\Le[i]}$

$\bits_{[\auxb, \Le]} \as \Fq$
Creates a field element from an array of allocated bits in little-endian order. Equivalent to writing:

$\bi \line{1}{\bi}{\bits_{[\auxb, \Le]}: \CircuitBit^{[0 < n \leq \ell_\Fq^\bit]} = \text{<some allocated bits>}}$
$\bi \line{2}{\bi}{\fieldelement: \Fq = 0}$
$\bi \line{3}{\bi}{\for i \in [n]:}$
$\bi \line{4}{\bi}{\quad \fieldelement \mathrel{+}= 2^i * \bits_{[\auxb, \Le]}[i]\dot\int}$

R1CS and Circuit Notation and Types

$\RCS$
The type used to represent an instance of a rank-1 constraint system. Each $\RCS$ instance can be thought of as a structure containing two vectors, the primary and auxiliary assignments, and a system of quadratic constraints (a vector where each element is an equality of the form $\LinearCombination * \LinearCombination = \LinearCombination$, and each constraint polynomial’s variables are values allocated within an $\RCS$ instance’s assignment vectors). The $\RCS$ type is left opaque as its implementation is beyond the scope of this document.

$\RCS\dot\one_\pubb = \CircuitVal\ \{ \index_\text{primary}: 0, \int: 1\ \}$
Every $\RCS$ instance is instantiated with its first primary assignment being the multiplicative identity.

$\RCS\dot\publicinput(\Fq)$
$\RCS\dot\privateinput(\Fq)$
$\RCS\dot\assert(\LinearCombination * \LinearCombination = \LinearCombination)$

The $\RCS$ type has three methods: one for adding primary assignments, one for adding auxiliary assignments, and one for adding constraints.

$\struct \CircuitVal\ \{$
$\quad \index_{\langle \text{primary|auxiliary} \rangle}: \mathbb{N},$
$\quad \int: \Fq,$
$\}$

An instance of $\CircuitVal$ is a reference to a single field element, of value $\int$, allocated within a constraint system. The field $\int$ is a copy of the allocated value, arithmetic on $\int$ is not circuit arithmetic. The field $\index$ refers to the index of an allocated integer (a copy of $\int$) in either the primary or auxiliary assignments vectors of an $\RCS$ instance. Every unique wire in a circuit has a unique (up to the instance of $\RCS$) $\CircuitVal\dot\index$ in either the primary or auxiliary assignments vectors.

$\struct \CircuitVal_\Bit \equiv \CircuitVal\ \{$
$\quad \index_{\langle \text{primary|auxiliary} \rangle}: \mathbb{N},$
$\quad \int: \Fq \in \Bit,$
$\}$

A reference to an allocated bit, a boolean constrained value, within an $\RCS$ instance.

$\CircuitBitOrConst \equiv \{ \CircuitBit,\thin \Bit \}$
The type $\CircuitBitOrConst$ is an enum representing either an allocated bit or a constant unallocated bit.

$\deq$
The “diamond-equals” sign shows that a value has been allocated within $\RCS$. If an assignment $=$ operates on $\CircuitVal{\bf \sf s}$ and does not have a diamond, then no value was allocated in a circuit.

$\value_\auxb: \CircuitVal \deq \RCS\dot\privateinput(\value \as \Fq)$
The subscript $_\auxb$ denotes $\value$ as being allocated within $\RCS$ and located in the auxiliary assignments vector.

$\value_\pubb: \CircuitVal \deq \RCS\dot\publicinput(\value \as \Fq)$
The subscript $_\pubb$ denotes an allocated value as being in an $\RCS$ primary assignments vector. The function $\RCS\dot\publicinput$ adds a public input $\value$ to the primary assignments vector (denoted $\value_\pubb$), allocates $\value$ within the auxiliary assignments vector (denoted $\value_\auxb$), and adds an equality constraint checking that the SNARK prover’s auxiliary assignment $\value_\auxb$ is equal to the verifier’s public input $\value_\pubb$.

$\value_\aap: \CircuitVal$
The allocated $\value$ may be either an auxiliary or primary assignment.

$\bit_\pubb: \CircuitBit \deq \RCS\dot\publicinput(\Bit \as \Fq)$
$\bit_\auxb: \CircuitBit \deq \RCS\dot\privateinput(\Bit \as \Fq)$
The subscript $_\Bit$ in $\CircuitBit$ denotes an allocated value that has been boolean constrained. The functions $\RCS\dot\publicinput$ and $\RCS\dot\privateinput$ add the boolean constraint $(1 - \bit_\aap) * (\bit_\aap) = 0$ opaquely based upon whether or not the return type is $\CircuitVal$ or $\CircuitBit \thin$.
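
This boolean constraint admits exactly the two bit values: over the prime field, $(1 - \bit_\aap) * \bit_\aap = 0$ holds if and only if $\bit_\aap \in \{0, 1\} \thin$.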

$\bits_{[\auxb]}: \CircuitBit^{[*]}$
The subscript $_{[\auxb]}$ denotes a value as being an array of allocated bits (boolean constrained circuit values).

$\bits_{[\auxb + \constb]}: \CircuitBitOrConst^{[*]}$
The subscript $_{[\auxb + \constb]}$ denotes a value as being an array of allocated bits and constant/unallocated boolean values $\Bit$.

$\bits_{[\auxb, \Le]}: \CircuitBit^{[*]}$
The subscript $_{[\auxb, \Le]}$ denotes an array of allocated bits as having little-endian bit order (least significant bit is at index 0 and the most significant bit is the last element in the array).

$\bits_{[\auxb, \lebytes]}: \CircuitBit^{[*]}$
The subscript $_{[\auxb, \lebytes]}$ denotes an array of allocated bits as having little-endian byte order (least significant byte first where the most significant bit of each byte is at index 0 in that byte’s 8-bit slice of the array).

$\arr_{[[\auxb]]}: {\CircuitBit^{[m]}}^{[n]}$
The subscript $_{[[\auxb]]}$ denotes an array where each element is an array of allocated bits.

$\LinearCombination(\CircuitVal_0, \ldots, \CircuitVal_n)$
Represents an unevaluated linear combination of allocated value variables and unallocated constants, for example the degree-1 polynomial $2 * \CircuitVal_0 + \CircuitVal_1 + 5$ (where $2$ and $5$ are unallocated constants) is a $\LinearCombination \thin$.

$\lc: \LinearCombination \equiv 0$
$\lc: \LinearCombination \equiv \CircuitVal_0 + \CircuitVal_1$
$\lc: \LinearCombination \equiv \sum_{i \in [3]}\ 2^i * \CircuitVal$
When linear combinations are instantiated they are not evaluated to a single integer value and stored in $\lc$. The equivalency notation $\equiv$ is used to show that a linear combination is not evaluated to a single value, but represents a symbolic polynomial over allocated $\CircuitVal$’s. In the above examples the values $0$ and $2^i$ are unallocated constants.

Protocol Assumptions

Values are chosen for Protocol constants such that the following are true:

  • The sector size $\ell_\sector^\byte$ is divisible by the node size $\ell_\node^\byte$.
  • The number of nodes always fits an unsigned 32-bit integer $N_\nodes \leq 2^{32}$, and thus node indexes are representable using 32-bit unsigned integers.
  • Every distinct contiguous 32-byte slice of a sector $D_i \in D$ represents a valid prime field element $\Fq$ (every $D_i$ is a valid $\Safe$, this is accomplished via sector preprocessing).

Hash Functions

$\Sha{256}(\Bit^{[*]}) \rightarrow \Byte^{[32]}$
The $\textsf{Sha256}$ hash function operates on preimages of an arbitrary number of bits and returns a 32-byte array (256 bits).

$\Sha{254}(\Bit^{[*]}) \rightarrow \Safe$
$\Sha{254}_2({\Byte^{[32]}}^{[2]}) \rightarrow \Safe$
The $\Sha{254}$ hash functions are identical to $\Sha{256}$ except that the last two bits of the $\Sha{256}$ 256-bit digest are zeroed out:

$\bi \Sha{254}(x) \equiv \Sha{256}(x)[\ldotdot 31] \concat (\Sha{256}(x)[31] \AND 1^{[6]})$

The hash function $\Sha{254}$ operates on preimages of unsized bit arrays while $\Sha{254}_2$ operates on preimages of two 32-byte values.
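
A non-normative Go sketch of $\Sha{254}_2$, following the masking used in the $\Safe$ definition (the two most significant bits of the last digest byte are cleared); the function name is this document's:

package hashes

import "crypto/sha256"

// sha254_2 hashes the concatenation of two 32-byte inputs with SHA-256 and
// clears the two most significant bits of the last digest byte, so the result
// is a Safe value representable as a field element.
func sha254_2(a, b [32]byte) [32]byte {
    h := sha256.New()
    h.Write(a[:])
    h.Write(b[:])
    var out [32]byte
    copy(out[:], h.Sum(nil))
    out[31] &= 0x3f // zero the two high bits of the last byte
    return out
}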

$\Poseidon(\Bit^{[*]}) \rightarrow \Fq$
$\Poseidon_2(\Fq^{[2]}) \rightarrow \Fq$
$\Poseidon_8(\Fq^{[8]}) \rightarrow \Fq$
$\Poseidon_{11}(\Fq^{[11]}) \rightarrow \Fq$
The $\Poseidon$ hash functions operate on preimages of an arbitrary number of bits as well as preimages containing 2, 8, and 11 field elements.

Naming Differences

This specification document deviates in one major way from the naming conventions used within the Filecoin proofs implementation: what the code calls tree_r_last and comm_r_last, this document calls $\TreeR$ and $\CommR$ respectively. What the code calls comm_r, this document calls $\CommCR$.

In the code, tree_r_last is built over the replica, this is why this document uses the notation $\TreeR$.

In the code, comm_r is not the root of the Merkle tree built over the replica tree_r_last, but is the hash of tree_c.root and tree_r_last.root. This is why this specification document has changed comm_r’s name to $\CommCR$.

tree_r_last $\mapsto \TreeR$
comm_r_last $\mapsto \CommR$
comm_r $\mapsto \CommCR$

Bit Ordering

Little-Endian Bits

A bit array whose least significant bit is at index $0$ and where each subsequent index points to the next most significant bit has little-endian bit order. Little endian bit arrays are denoted by the subscript $_\Le\thin$, for example:

$\bi \bits_\Le: \Bit^{[n]} = \lebinrep{x}$

An unsigned integer $\int$’s little-endian $n$-bit binary representation $\lebinrep{\int}$, where $n = \lceil \log_2(\int) \rceil$, is defined as:

$\bi \lebinrep{\int}: \Bit^{[n]} = [(\int \gg i) \AND 1 \mid \forall i \in [n]]$

An unsigned integer $\int$’s little-endian binary representation can be padded with 0’s at the most significant end to yield an $n'$-bit binary representation, where $n' > n \thin$:

$\bi \lebinrep{\int}: \Bit^{[n']} = \lebinrep{\int} \concat 0^{[n' - n]}$

An unsigned integer $\int$’s little-endian bit string is the reverse of its big-endian bit string:

$\bi \lebinrep{\int} = \reverse(\bebinrep{\int})$
$\bi \reverse(\lebinrep{\int}) = \bebinrep{\int}$
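
For example, for $\int = 6$ (so $n = \lceil \log_2(6) \rceil = 3$ and $6_{10} = 110_2$):

$\bi \lebinrep{6} = [0, 1, 1] \qquad \bebinrep{6} = [1, 1, 0]$
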
Big-Endian Bits

A bit array whose most significant bit is at index $0$ and where each subsequent index points to the next least significant bit has big-endian bit order. Big-endian bit strings are denoted by the subscript $_\be \thin$, for example:

$\bi \bits_\be: \Bit^{[n]} = \bebinrep{x}$

An unsigned integer $\int$’s $n$-bit big-endian bit string, where $n = \lceil \log_2(\int) \rceil$, is defined as:

$\bi \bebinrep{\int}: \Bit^{[n]} = [(\int \gg (n - i - 1)) \AND 1 \mid \forall i \in [n]]$

An unsigned integer’s big-endian bit string is the reverse of its little-endian bit string:

$\bi \bebinrep{\int} \equiv \reverse(\lebinrep{\int})$
$\bi \reverse(\bebinrep{\int}) = \lebinrep{\int}$
Little-Endian Bytes

The $\lebytes$ bit order signifies that an $n$-bit bit string (where $n$ is evenly divisible by 8) has bit ordering such that: each distinct 8-bit slice of the array is more significant than the previous (the first byte is the least significant) and each byte has big-endian bit order (the first bit in each byte is the most significant with respect to that 8-bit slice). Bit strings of $\lebytes$ bit order have length $n$, where $n$ is $\lceil \log_2(\int) \rceil$ rounded up to a multiple of 8. Bit strings having $\lebytes$ bit order are denoted using the subscript $_\lebytes \thin$.

An unsigned integer $\int$ represented in binary using $\lebytes$ bit order is defined as:

$\bi \lebytesbinrep{\int} \bi : \Bit^{[n]} = \byte_0 \concat \byte_1 \concat \ldots \concat \byte_{n / 8}$
$\bi \byte_0 \quad\quad\bi\thin\thin\thin : \Bit^{[8]} = [(\LSByte_0, \MSBit), \dots, (\LSByte_0, \LSBit)]$
$\bi \byte_1 \quad\quad\bi\thin\thin\thin : \Bit^{[8]} = [(\LSByte_1, \MSBit), \dots, (\LSByte_1, \LSBit)]$
$\bi \dots$
$\bi \byte_{n / 8} \quad\quad\thin\thin : \Bit^{[8]} = [(\MSByte, \MSBit), \dots, (\MSByte, \LSBit)]$

where the integer’s least significant byte $\LSByte_0$ is the first byte $\byte_0$ in the $\lebytes$ bit string, the integer’s second least significant byte $\LSByte_1$ is the second byte $\byte_1$ in the bit string, and so on until the last byte in the bit string is the most significant byte $\MSByte$ with respect to the integer’s value. Each byte in the bit string has big-endian bit order: each byte’s most significant bit $\MSBit$ is first and its least significant bit $\LSBit$ is last. $n / 8$ is the number of distinct 8-bit slices of $\int$’s $n$-bit binary representation.

The $\lebytes$ bit order is used to represent a field element $\Fq$ as a 256-bit $\Sha{256}$ input block. Because $\Fq$ and $\Fqsafe$ have bit lengths of 255 and 254 respectively, one or two zero bits are padded onto the most significant end of $\lebinrep{\Fq}$ and $\lebinrep{\Fqsafe}$ to fill the 256-bit SHA block. The padding operation occurs while the bit string has little-endian bit order ($\bits_\Le \concat 0^{[256 - \len(\bits_\Le)]}$) before it is converted to $\lebytes$ bit order.

Little-Endian Bits to Little-Endian Bytes

Implementation: storage_proofs::core::util::reverse_bit_numbering()

$\overline{\underline{\Function \lebitstolebytes(\bits_\Le: \Bit^{[m]}) \rightarrow \Bit^{[n]}}}$
$\line{1}{\bi}{\bits_\Le: \Bit^{[n]} = \bits_\Le \concat 0^{[n - m]}, \quad n = 8 \lceil m / 8 \rceil}$
$\line{2}{\bi}{\bits_\lebytes: \Bit^{[n]} = [\ ]}$
$\line{3}{\bi}{\for i \in [n / 8]:}$
$\line{4}{\bi}{\quad \byte_\Le: \Bit^{[8]} = \bits_\Le[i * 8 \thin\ldotdot\thin (i + 1) * 8]}$
$\line{5}{\bi}{\quad \byte_\be: \Bit^{[8]} = \reverse(\byte_\Le)}$
$\line{6}{\bi}{\quad \bits_\lebytes\dot\extend(\byte_\be)}$
$\line{7}{\bi}{\return \bits_\lebytes}$
Little-Endian Bytes to Little-Endian Bits

The length $n$ of $\bits$ must be divisible by 8 (must contain an integer number of bytes).

$\overline{\underline{\Function \lebytestolebits(\bits_\lebytes: \Bit^{[n]}) \rightarrow \Bit^{[n]}}}$
$\line{1}{\bi}{\assert(n \MOD 8 = 0)}$
$\line{2}{\bi}{\bits_\Le: \Bit^{[n]} = [\ ]}$
$\line{3}{\bi}{\for i \in [n / 8]:}$
$\line{4}{\bi}{\quad \byte_\be: \Bit^{[8]} = \bits_\lebytes[i * 8 \thin\ldotdot\thin (i + 1) * 8]}$
$\line{5}{\bi}{\quad \byte_\Le: \Bit^{[8]} = \reverse(\byte_\be)}$
$\line{6}{\bi}{\quad \bits_\Le\dot\extend(\byte_\Le)}$
$\line{7}{\bi}{\return \bits_\Le}$
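
A non-normative Go sketch of $\lebitstolebytes$ is given below; note that, apart from the padding step, $\lebytestolebits$ performs the same per-byte reversal, so the reordering is its own inverse on byte-aligned inputs.

package bits

// leBitsToLeBytes pads a little-endian bit string with zero bits to a whole
// number of bytes and then reverses the bit order within each 8-bit slice,
// matching the le-bits -> le-bytes reordering defined above.
func leBitsToLeBytes(bitsLE []bool) []bool {
    padded := append([]bool{}, bitsLE...)
    for len(padded)%8 != 0 {
        padded = append(padded, false)
    }
    out := make([]bool, 0, len(padded))
    for i := 0; i < len(padded); i += 8 {
        for j := 7; j >= 0; j-- { // reverse this byte's bits
            out = append(out, padded[i+j])
        }
    }
    return out
}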

Filecoin Proofs Terminology

PoRep and PoSt

The Filecoin protocol has three proof variants: PoRep (proof-of-replication), Winning PoSt (proof-of-spacetime), and Window PoSt. The two PoSt variants, Winning and Window, are identical aside from their number of Merkle challenges and number of replica $\TreeR$’s for which the Merkle challenges are made.

A PoRep proof (and PoRep proof batch) is made for a single replica, whereas a PoSt proof (and proof batch) is made using one or more replicas determined by the constants $N_{\postreplicas / k \thin \aww} \thin$.

Vanilla Proofs vs. SNARKs

Each proof variant is proven in two ways: vanilla (non-SNARK) and SNARK.

Vanilla proofs are used to instantiate an instance of a proof variant’s circuit. A circuit instance is then used to generate a $\GrothProof$.

Partitions Proofs and Batches

Each proof variant’s vanilla proof is called a partition proof, $\PorepPartitionProof$ and $\PostPartitionProof$. A PoRep or PoSt prover generates multiple partition proofs simultaneously, called a vanilla proof batch. The number of partition proofs per proof variant batch is given by the constants: $N_{\poreppartitions / \batch}$, $N_{\postpartitions / \batch, \P, \winning}$, and $N_{\postpartitions / \batch, \P, \window} \thin$. A SNARK is generated for each vanilla proof in a vanilla proof batch, resulting in a batch of corresponding SNARK proofs.

The function $\createporepbatch_R$ shows how a PoRep batch proof can be made for a replica $R$, where $R$ (and its associated trees, commitments, and labeling) were output by the replication process. A similar process is used to produce Winning and Window PoSt batches.

$\overline{\Function \createporepbatch_R(\qquad\qquad\qquad\qquad\quad}$
$\quad \ReplicaID,$
$\quad \TreeD,$
$\quad \TreeC,$
$\quad \TreeR,$
$\quad \CommD,$
$\quad \CommC,$
$\quad \CommR,$
$\quad \CommCR,$
$\quad \Labels,$
$\quad \R_{\porepchallenges, \batch} \thin,$
$\underline{) \rightarrow (\PorepPartitionProof, \GrothProof)^{[N_{\poreppartitions / \batch}]}}$
$\line{1}{\bi}{\batch: (\PorepPartitionProof, \GrothProof)^{[N_{\poreppartitions / \batch}]} = [\ ]}$
$\line{2}{\bi}{\for k \in [N_{\poreppartitions / \batch}]:}$
$\line{3}{\bi}{\quad \PorepPartitionProof_k = \createvanillaporepproof(}$
$\quad\quad\quad k,$
$\quad\quad\quad \ReplicaID,$
$\quad\quad\quad \TreeD,$
$\quad\quad\quad \TreeC,$
$\quad\quad\quad \TreeR,$
$\quad\quad\quad \Labels,$
$\quad\quad\quad \R_{\porepchallenges, \batch} \thin,$
$\quad\quad\thin )$
$\line{4}{\bi}{\quad \RCS_k = \createporepcircuit(}$
$\quad\quad\quad \PorepPartitionProof_k,$
$\quad\quad\quad k,$
$\quad\quad\quad \ReplicaID,$
$\quad\quad\quad \CommD,$
$\quad\quad\quad \CommC,$
$\quad\quad\quad \CommR,$
$\quad\quad\quad \CommCR,$
$\quad\quad\quad \R_{\porepchallenges, \batch} \thin,$
$\quad\quad\thin )$
$\line{5}{\bi}{\quad \GrothProof_k = \creategrothproof(\GrothEvaluationKey_\porep, \RCS_k)}$
$\line{6}{\bi}{\quad \batch\dot\push((\PorepPartitionProof_k, \GrothProof_k))}$
$\line{7}{\bi}{\return \batch}$

BlockSync

Name: Block Sync
Protocol ID: /fil/sync/blk/0.0.1

BlockSync is a simple request/response protocol that allows Filecoin nodes to request ranges of Tipsets from each other when they have fallen out of sync, e.g., after downtime. Given that the Filecoin blockchain is extended in Tipsets (i.e., groups of blocks), rather than in blocks, the BlockSync protocol operates in terms of Tipsets.

The request message requests a chain segment of a given length by the hash of its highest Tipset. It is worth noting that this does not necessarily apply to the head (i.e., latest Tipset) of the current chain only, but it can also apply to earlier segments. For example, if the current height is at 5000, but a node is missing Tipsets between 4500-4700, then the Head requested is 4700 and the length is 200.

The Options allow the requester to specify whether they want to receive block headers of the Tipsets only, the transaction messages included in every block, or both.

The response contains the requested chain segment in reverse iteration order. Each item in the Chain array contains either the block headers for that Tipset if the Blocks option bit in the request is set, or the messages across all blocks in that Tipset, if the Messages bit is set, or both, if both option bits are set.

The MsgIncludes array contains one array of integers for each block in the Blocks array. Each inner array lists the indexes into the Messages array of the messages included in the corresponding Block.

If not all Tipsets requested could be fetched, but the Head of the chain segment requested was successfully fetched, then this is not considered an error, given that the node can continue extending the chain from the Head onwards.

type BlockSyncRequest struct {
    ## The TipSet being synced from
    start [&Block]
    ## How many tipsets to sync
    requestLength UInt
    ## Query options
    options Options
}

type Options enum {
    ## Include only blocks
    | Blocks 1
    ## Include only messages
    | Messages 2
    ## Include messages and blocks
    | BlocksAndMessages 3
}

type BlockSyncResponse struct {
    chain [TipSetBundle]
    status Status
}

type TipSetBundle struct {
    blocks [Blocks]

    blsMsgs [Message]
    blsMsgIncludes [[UInt]]

    secpMsgs [SignedMessage]
    secpMsgIncludes [[UInt]]
}

type Status enum {
    ## All is well.
    | Success 0
    ## We could not fetch all blocks requested (but at least we returned
    ## the `Head` requested). Not considered an error.
    | PartialResponse 101
    ## Request.Start not found.
    | BlockNotFound 201
    ## Requester is making too many requests.
    | GoAway 202
    ## Internal error occurred.
    | InternalError 203
    ## Request was bad.
    | BadRequest 204
}

Example

For the set of arrays in the following TipSetBundle, the corresponding messages per block are as shown below:

TipSetBundle

Blocks: [b0, b1]
secpMsgs: [mA, mB, mC, mD]
secpMsgIncludes: [
  [0, 1, 3],
  [1, 2, 0],
]

Messages corresponding to each Block

Block 'b0': [mA, mB, mD]
Block 'b1': [mB, mC, mA]

In other words, the first element of the secpMsgIncludes array: [0, 1, 3] points to the 1st, 2nd and 4th element of the secpMsgs array: mA, mB, mD, which correspond to the 1st element of the Blocks array b0. Hence, Block 'b0': [mA, mB, mD].

Similarly, the second element of the secpMsgIncludes array: [1, 2, 0] points to the 2nd, 3rd and 1st element of the secpMsgs array: mB, mC, mA, which correspond to the 2nd element of the Blocks array b1. Hence, Block 'b1': [mB, mC, mA].
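
The following non-normative Go sketch reconstructs the per-block message lists from a TipSetBundle as in the example above; the types are simplified placeholders.

package blocksync

// tipSetBundle is a simplified placeholder mirroring the TipSetBundle fields
// relevant to this example.
type tipSetBundle struct {
    Blocks          []string
    SecpMsgs        []string
    SecpMsgIncludes [][]int
}

// messagesPerBlock returns, for each block, the messages referenced by the
// corresponding index list in SecpMsgIncludes.
func messagesPerBlock(b tipSetBundle) map[string][]string {
    out := make(map[string][]string)
    for i, blk := range b.Blocks {
        for _, msgIdx := range b.SecpMsgIncludes[i] {
            out[blk] = append(out[blk], b.SecpMsgs[msgIdx])
        }
    }
    return out
}

Applied to the bundle above, messagesPerBlock yields b0: [mA, mB, mD] and b1: [mB, mC, mA].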

GossipSub

Messages and block headers alongside the message references are propagated using the libp2p GossipSub router. In order to guarantee interoperability between different implementations, all filecoin full nodes must implement and use this protocol. All pubsub messages are authenticated and must be syntactically validated before being propagated further.

GossipSub is a gossip-based pubsub protocol that utilizes two types of links to propagate messages: i) mesh links that carry full messages in an eager-push (i.e., proactive send) manner and ii) gossip links that carry message identifiers only and realize a lazy-pull (i.e., reactive request) propagation model. Mesh links form a global mesh-connected structure, where, once messages are received they are forwarded in full to mesh-connected nodes, realizing an “eager-push” model. Instead, gossip links are utilized periodically to complement the mesh structure. During gossip propagation, only message headers are sent to a selected group of nodes in order to inform them of messages that they might not have received before. In this case, nodes ask for the full message, hence realizing a reactive request, or “lazy pull” model.

GossipSub includes a number of security extensions and mitigation strategies that make the protocol robust against attacks. Please refer to the protocol’s specification for details on GossipSub’s design, implementation and parameter settings, or to the technical report for the design rationale and a more detailed evaluation of the protocol.

Cryptographic Primitives

  • Merkle tree/DAG

  • Vector commitment scheme

  • zkSNARK

  • Reliable broadcast channel (libp2p)

  • TODO: Add more detail and include references to relevant papers.

Signatures

Signatures are cryptographic functions that attest to the origin of a particular message. In the context of Filecoin, signatures are used to send and receive messages with the assurance that each message was generated by a specific entity. In other words, it is infeasible for an entity i to generate a signed message that appears to have been generated by j, with j != i.

Filecoin uses signatures to associate an action with a given party. For example, Filecoin uses signatures in order to validate deal messages, which represent actions such as storage deals. Filecoin uses signatures to verify the authenticity of the following objects (non-exhaustive list):

  • Messages: Users authenticate their messages to the blockchain.
  • Tickets: Miner authenticates its ticket (see Storage Miner).
  • Blocks: Block leader signs over all data in the block.

Messages

To generate a signature for the Message type, compute the signature over the message’s CID (taken as a byte array).

Note: for each specific use of a signature scheme, it is recommended to use a domain separation tag to treat the hash function as an independent random oracle. These tags are indicated in the relevant places throughout the specs. Read more about this in Randomness.

Signature Types

Filecoin currently uses two types of signatures:

  • ECDSA signatures over the Secp256k1 elliptic curve to authenticate user messages, mainly for compatibility with external blockchain systems.
  • BLS signatures over the BLS12-381 group of curves

Both signature types fulfill the Signature interface and each type has additional functionality as explained below.

type Message Bytes
type SecretKey Bytes
type PublicKey Bytes
type SignatureBytes Bytes

type SigKeyPair struct {
    PublicKey
    SecretKey
}

type Signature struct {
    Type  SigType         @(internal)
    Sig   SignatureBytes  @(internal)
}

type SigType enum {
    ECDSASigType
    BLSSigType
}
ECDSA Signatures

Filecoin uses the ECDSA signature algorithm over the secp256k1 curve to authenticate blockchain messages. The main reason is to be able to validate messages from other blockchain systems that use secp256k1 (such as Bitcoin or exchanges in general). ECDSA signatures offer an additional useful functionality as well: the ability to recover the public key from a given signature. This feature can allow space to be saved on the blockchain by extracting the public key locally from the signature rather than specifying an ID of the public key.

// ECDSA implements the Signature interface using the ECDSA algorithm with
// the Secp256k1 elliptic curve.
type ECDSA struct {
    // The Signature object is the one returned from SigKeyPair.Sign(). It can
    // be casted to ECDSA to get the additional functionality described below.
    Signature

    // Recover recovers a public key associated with a particular signature.
    //
    // Out:
    //    pk - the public key associated with `M` who signed `m`
    //    err - a standard error message indicating any process issues
    //    **
    // In:
    //    m - a series of bytes representing the signed message
    //    sig - a series of bytes representing a signature usually `r`|`s`
    //
    Recover(m Message, sig SignatureBytes) struct {pk PublicKey, err error}
}

Wire Format: Filecoin uses the standard secp256k1 signature serialization, as described below. For more details on how the Filecoin Signature type is serialized, see Signature.

SignatureBytes = [0x30][len][0x02][r][indicator][s][indicator][recovery]

s = Scalar of size 32 bytes

r = Compressed elliptic curve point (x-coordinate) of size 32 bytes

recovery = Information needed to recover a public key from sig.

  • LSB(0) = parity of y-coordinate of r
  • LSB(1) = overflow indicator

indicator = a 2 byte formatting indicator

External References: Elliptic Curve Cryptography Paper

BLS Signatures

Filecoin uses the BLS signature scheme over the BLS12-381 group of elliptic curves. You can find the default Rust implementation in Filecoin’s repo.

// BLS implements the Signature interface using the BLS signature scheme
// with the BLS12-381 group of elliptic curves.
type BLS struct {
    // This signature is the one returned from SigKeyPair.Sign(). It can be
    // casted to a BLS signature struct to get the additional functionalities.
    Signature

    // This represents the largest potential value for a BLS signature in Bytes
    MaxSigValue() Bytes

    // Aggregates this BLS signature and `sig` into one BLS signature that can
    // be verified against the aggregation of the two public key that signed
    // the aggregated signatures.
    Aggregate(sig2 SignatureBytes) SignatureBytes

    // VerifyAggregate verifies the aggregate signature with the aggregate
    // public key over all the distinct messages given. Note that if all
    // messages are the same, it is sufficient and correct to only call
    // `Verify` since it is a subset of `VerifyAggregate`.
    VerifyAggregate(messages [Message], aggPk BLSPublicKey, aggSig SignatureBytes) bool
}

// BLSPublicKey is a PublicKey with an addition method to aggregate public keys
// together.
type BLSPublicKey struct {
    PublicKey

    // Aggregate this public key with p2 into one public key. This aggregated
    // public key can 
    // - verify aggregated signatures signed by the two BLSPublicKey
    // - be aggregated further down with other (aggregated or not) BLSPublicKey.
    Aggregate(p2 BLSPublicKey) BLSPublicKey
}
package crypto

import util "github.com/filecoin-project/specs/util"

func (self *BLS_I) Verify(input util.Bytes, pk PublicKey, sig util.Bytes) bool {
	// blsPk := pk.(*BLSPublicKey)
	// 1. Verify public key according to string_to_curve section 2.6.2.1. in
	// 	https://tools.ietf.org/html/draft-boneh-bls-signature-00#page-12
	// 2. Verify signature according to section 2.3
	// 	https://tools.ietf.org/html/draft-boneh-bls-signature-00#page-8
	panic("bls.Verify TODO")
	return false
}

func (self *BLS_I) MaxSigValue() util.Bytes {
	panic("TODO")
}

func (self *BLS_I) Sign(input util.Bytes, sk *SecretKey) bool {
	panic("see 2.3 in https://tools.ietf.org/html/draft-boneh-bls-signature-00#page-8")
	return false
}

func (self *BLS_I) Aggregate(sig2 util.Bytes) util.Bytes {
	panic("see 2.5 in https://tools.ietf.org/html/draft-boneh-bls-signature-00#page-8")
	var ret util.Bytes
	return ret
}

func (self *BLS_I) VerifyAggregate(messages []util.Bytes, aggPk PublicKey, aggSig util.Bytes) bool {
	panic("see 2.5.2 in https://tools.ietf.org/html/draft-boneh-bls-signature-00#page-9")
	return false
}

Choice of group: The BLS signature scheme requires the use of a pairing-equipped curve, which generally yields three groups: G_1, G_2 and G_T. In the BLS signature scheme, there is a choice of which groups to use for the public key and the signature:

  • Public key is on G_1 and signature on G_2
  • Public key is on G_2 and signature on G_1

The group G_1 is “smaller” and hence offers faster arithmetic operations and smaller byte representations of its elements. Filecoin currently uses the group G_1 for representing public keys and the group G_2 for representing signatures.

Wire Format: Filecoin uses the standard way to serialize BLS signatures as explained in the RFC Section 2.6.1.

Rationale: BLS signatures have two main characteristics that make them ideal candidates for recent blockchain systems:

  • BLS signatures are deterministic: for a given message and a given secret key, the signature is always the same. This removes an important security weakness of most randomized signature schemes: the signer must never re-use the same randomness twice, otherwise the private key is revealed. Deterministic signatures are also ideal for reducing the attack surface in terms of grinding, which is a real concern in recent proof-of-stake systems.
  • BLS signatures are aggregatable: one can aggregate signatures from different signers into one single signature. This feature enables drastically saving space on the blockchain, especially when aggregating user messages.

Aggregation Functionality: The aggregation functionality is commutative and associative, enabling partial aggregation. For example, given (PK1, sig1), (PK2, sig2), (PK3, sig3), one can first aggregate (PK12 = PK1 + PK2, sig12 = sig1 + sig2) then aggregate with the third tuple to produce (PK123 = PK12 + PK3, sig123 = sig12 + sig3).
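
A minimal, non-normative Go sketch of this partial aggregation for signatures; the aggregate helper is a hypothetical stand-in for BLS signature aggregation (group addition), not a real library call:

package bls

// aggregate is a hypothetical stand-in for BLS signature aggregation
// (group addition of the two signatures); it is not a real library call.
func aggregate(a, b []byte) []byte {
    // group addition elided in this sketch
    return nil
}

// partialAggregation shows that, because aggregation is commutative and
// associative, aggregating sig1 and sig2 first and then adding sig3 yields
// the same aggregate as aggregating in any other order.
func partialAggregation(sig1, sig2, sig3 []byte) ([]byte, []byte) {
    sig12 := aggregate(sig1, sig2)                       // partial aggregate
    sig123 := aggregate(sig12, sig3)                     // finish from the partial result
    sig123Alt := aggregate(sig1, aggregate(sig2, sig3))  // another grouping, same result
    return sig123, sig123Alt
}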

Aggregation Security: The naive BLS signature aggregation scheme is vulnerable to rogue-key attacks, where the attacker can freely choose its public key. To prevent this class of attacks, there exist three different kinds of measures, as explained here:

  • Enforce distinct messages
  • Prove knowledge of the secret key
  • Use a modified scheme (such as BLS Multi Sig)

Fortunately, Filecoin can enforce the first condition to safely use the aggregation property: Filecoin uses aggregation only for aggregating message signatures within a single block. Since Filecoin uses the account model to represent the state of the chain, each message for a given signer is used in combination with a nonce to avoid replay attacks. As a direct consequence, every message is unique, and thereby the aggregation is done on distinct messages. The assumption here is that the block producer enforces that distinction and the other miners check all messages to make sure they are valid.

Verifiable Random Function

Filecoin uses the notion of a Verifiable Random Function (VRF). A VRF uses a private key to produce a digest of an arbitrary message such that the output is unique per signer and per message. Any third party in possession of the corresponding public key, the message, and the VRF output, can verify if the digest has been computed correctly and by the correct signer. Using a VRF in the ticket generation process allows anyone to verify if a block comes from an eligible block producer (see Ticket Generation for more details).

A BLS signature can be used as the basis to construct a VRF. Filecoin transforms the BLS signature scheme it uses (see Signatures) into a VRF: following the random oracle model, it deterministically hashes the signature (using blake2b to produce a 256-bit output) to produce the final digest.

These digests are often used as entropy for randomness in the protocol (see Randomness).

type VRFPublicKey PublicKey
type VRFSecretKey SecretKey

// VRFKeyPair holds the secret key used to create a VRF output and the
// public key used to verify it.
type VRFKeyPair struct {
    VRFPublicKey
    VRFSecretKey

    // Generate a VRF from the given input with the SecretKey that can be
    // verified with the PublicKey
    Generate(input Bytes) VRFResult
}

type VRFResult struct {
    Output            Bytes  // @(internal)

    Proof             Bytes
    Digest            Bytes
    MaxValue()        Bytes
    ValidateSyntax()  bool

    Verify(input Bytes, pk VRFPublicKey) bool
}
package crypto

import (
	util "github.com/filecoin-project/specs/util"
	"golang.org/x/crypto/blake2b"
)

func (self *VRFResult_I) ValidateSyntax() bool {
	panic("TODO")
	return false
}

func (self *VRFResult_I) Verify(input util.Bytes, pk VRFPublicKey) bool {
	// return new(BLS).Verify(self.Proof, pk.(*BLSPublicKey), input)
	return false
}

func (self *VRFResult_I) MaxValue() util.Bytes {
	panic("")
	// return new(BLS).MaxSigValue()
}

func (self *VRFKeyPair_I) Generate(input util.Bytes) VRFResult {
	// sig := new(BLS).Sign(input, self.SecretKey)
	var blsSig util.Bytes

	digest := blake2b.Sum256(blsSig)
	ret := &VRFResult_I{
		Proof_:  blsSig,
		Digest_: digest[:],
	}
	return ret
}
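
The commented-out calls above defer to the BLS interface. As a minimal sketch of how Verify could be completed under the construction described above (the blsVerify parameter is a placeholder for a concrete BLS verification routine and is an assumption of this sketch, not part of the spec), verification checks the BLS signature over the input and then checks that the digest is the blake2b-256 hash of that signature:

package crypto

import (
	"bytes"

	util "github.com/filecoin-project/specs/util"
	"golang.org/x/crypto/blake2b"
)

// verifyVRFOutput sketches VRF verification for the BLS-based construction:
// the Proof must be a valid BLS signature over the input under pk, and the
// Digest must equal blake2b-256(Proof). blsVerify is a hypothetical helper.
func verifyVRFOutput(input util.Bytes, pk VRFPublicKey, proof, digest util.Bytes,
	blsVerify func(sig, msg util.Bytes, pk VRFPublicKey) bool) bool {
	if !blsVerify(proof, input, pk) {
		return false
	}
	expected := blake2b.Sum256(proof)
	return bytes.Equal(digest, expected[:])
}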

Randomness

Randomness is used throughout the protocol in order to generate values and extend the blockchain. Random values are drawn from a drand beacon and appropriately formatted for usage. We describe this formatting below.

Encoding Random Beacon randomness for on-chain use

Entropy from the drand beacon can be harvested into a more general data structure: a BeaconEntry, defined as follows:

type BeaconEntry struct {
    // Drand Round for the given randomness
    Round       uint64
    // Drand Signature for the given Randomness, named Data as a more general name for random beacon output
    Data   []byte
}

The BeaconEntry is then combined with other values to generate necessary randomness that can be specific to (eg) a given miner address or epoch. To be used as part of entropy, these values are combined in objects that can then be CBOR-serialized according to their algebraic datatypes.

Domain Separation Tags

Further, we define Domain Separation Tags with which we prepend random inputs when creating entropy.

All randomness used in the protocol must be generated in conjunction with a unique DST; DSTs are likewise used in certain Signature and Verifiable Random Function operations.

Forming Randomness Seeds

The beacon entry is combined with a few elements for use as part of the protocol as follows:

  • a DST (domain separation tag)
    • Different uses of randomness are distinguished by this type of personalization, which ensures that randomness used for one purpose will not conflict with randomness used elsewhere in the protocol
  • the epoch number, ensuring
    • liveness for leader election – in case no one is elected in a round and no new beacon entry has appeared (i.e. if the beacon frequency is slower than that of block production in Filecoin), the new epoch number will output new randomness for LE (this is how Filecoin maintains liveness during a beacon outage).
  • other entropy, ensuring that randomness is modified as needed by other context-dependent entropy (e.g. a miner address if we want the randomness to be different for each miner).

While not all elements are needed for every use of entropy (e.g. the inclusion of the round number is not necessary prior to genesis or outside of leader election, and other entropy is only used sometimes), we draw randomness as follows for the sake of uniformity/simplicity in the overall protocol.

In all cases, a drand signature is used as the base of randomness: it is hashed using blake2b in order to obtain a usable randomness seed. In order to make randomness seed creation uniform, the protocol derives all such seeds in the same way, using blake2b as a hash function to generate a 256-bit output as follows:

In round n, for a given randomness lookback l, and serialized entropy s:

GetRandomness(dst, l, s):
    randSeed = beacon.GetRandomnessFromBeacon(n-l)

    buffer = Bytes{}
    buffer.append(IntToBigEndianBytes(dst))
    buffer.append(randSeed)
    buffer.append(n-l) // the sought epoch
    buffer.append(s)

    return H(buffer)
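
The following is a minimal Go sketch of the derivation above (the function name and parameter encoding are illustrative; obtaining the beacon seed for epoch n-l is assumed to happen elsewhere):

package randomness

import (
	"encoding/binary"

	"golang.org/x/crypto/blake2b"
)

// DrawRandomness mixes a beacon-derived seed with a domain separation tag,
// the target epoch and caller-supplied entropy, then hashes the buffer with
// blake2b-256, mirroring GetRandomness above.
func DrawRandomness(randSeed []byte, dst int64, epoch int64, entropy []byte) [32]byte {
	buffer := make([]byte, 0, 16+len(randSeed)+len(entropy))

	tag := make([]byte, 8)
	binary.BigEndian.PutUint64(tag, uint64(dst))
	buffer = append(buffer, tag...)

	buffer = append(buffer, randSeed...)

	epochBytes := make([]byte, 8)
	binary.BigEndian.PutUint64(epochBytes, uint64(epoch))
	buffer = append(buffer, epochBytes...)

	buffer = append(buffer, entropy...)

	return blake2b.Sum256(buffer)
}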

Drawing tickets from the VRF-chain for proof inclusion

In some places, the protocol needs randomness drawn from the Filecoin blockchain’s VRF-chain (which generates tickets with each new block) rather than from the random beacon, in order to tie certain proofs to a particular set of Filecoin blocks (i.e. a given chain or fork). In particular, SealRandomness must be taken from the VRF chain, in order to ensure that no other fork can replay the Seal (see sealing for more).

A ticket is drawn from the chain for randomness as follows, for a given epoch n, and ticket sought at epoch e:

GetRandomnessFromVRFChain(e):
    While ticket is not set:
        Set wantedTipsetHeight = e
        if wantedTipsetHeight <= genesis:
            Set ticket = genesis ticket
        else if blocks were mined at wantedTipsetHeight:
            ReferenceTipset = TipsetAtHeight(wantedTipsetHeight)
            Set ticket = minTicket in ReferenceTipset
        If no blocks were mined at wantedTipsetHeight:
            wantedTipsetHeight--
            (Repeat)
    return ticket.Digest()

In plain language, this means:

  • Choose the smallest ticket in the Tipset if it contains multiple blocks.
  • When sampling a ticket from an epoch with no blocks, draw the min ticket from the prior epoch with blocks

This ticket is then combined with a Domain Separation Tag, the round number sought and appropriate entropy to form randomness for various uses in the protocol.

Entropy to be used with randomness

As stated above, different uses of randomness may require added entropy. The CBOR-serialization of the inputs to this entropy must be used.

For instance, if using entropy from an object of type foo, its CBOR-serialization would be appended to the randomness in GetRandomness(). If using both an object of type foo and one of type bar for entropy, you may define an object of type baz (as below) which includes all needed entropy, and include its CBOR-serialization in GetRandomness().

type baz struct {
    firstObject     foo
    secondObject    bar
}
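
As a minimal illustration (the concrete field types and the use of the fxamacker/cbor library are assumptions made for this sketch, not part of the protocol definition), such an entropy object would be CBOR-serialized and the resulting bytes supplied as the entropy argument s of GetRandomness():

package main

import (
	"fmt"

	cbor "github.com/fxamacker/cbor/v2"
)

// Baz mirrors the illustrative struct above; the field types are hypothetical.
type Baz struct {
	FirstObject  []byte
	SecondObject uint64
}

func main() {
	// CBOR-serialize the entropy object; these bytes are what would be
	// appended to the randomness buffer in GetRandomness().
	entropy, err := cbor.Marshal(Baz{FirstObject: []byte("f01234"), SecondObject: 7})
	if err != nil {
		panic(err)
	}
	fmt.Printf("entropy: %x\n", entropy)
}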

Currently, we distinguish the following entropy needs in the Filecoin protocol (this list is not exhaustive):

  • TicketProduction: requires MinerIDAddress
  • ElectionProofProduction: requires current epoch and MinerIDAddress – epoch is already mixed in from ticket drawing so in practice is the same as just adding MinerIDAddress as entropy
  • WinningPoStChallengeSeed: requires MinerIDAddress
  • WindowedPoStChallengeSeed: requires MinerIDAddress
  • WindowedPoStDeadlineAssignment: TODO @jake
  • SealRandomness: requires MinerIDAddress
  • InteractiveSealChallengeSeed: requires MinerIDAddress

The above uses of MinerIDAddress ensure that the drawn randomness is distinct for every miner drawing it (regardless of whether they share worker keys, e.g. in the case of randomness that is then signed as part of leader election or ticket production).

$$ \gdef\Zp{{\mathbb{Z}_p}} \gdef\Zbin{{\mathbb{Z}_{2^m}}} \gdef\thin{{\thinspace}} \gdef\Byte{{\mathbb{B}}} \gdef\Bit{{\{0, 1\}}} \gdef\typecolon{\mathbin{\large :}} \gdef\neg{{\text{-}}} \gdef\constb{{\textbf{const }}} \gdef\as{{\textbf{ as }}} \gdef\msb{{\textsf{msb}}} \gdef\if{{\text{if }}} \gdef\ifb{{\textbf{if } \hspace{1pt}}} \gdef\bi{{\ \ }} \gdef\FieldBits{{\text{FieldBits}}} \gdef\SboxBits{{\text{SboxBits}}} \gdef\RoundConstants{{\text{RoundConstants}}} \gdef\hso{{\hspace{1pt}}} \gdef\Function{{\textbf{Function: }}} \gdef\init{{\textsf{init}}} \gdef\for{{\textbf{for }}} \gdef\foreach{{\textbf{for each }}} \gdef\bcolon{{\hspace{1pt} \textbf{:}}} \gdef\state{{\textsf{state}}} \gdef\xor{\oplus_\text{xor}} \gdef\bit{{\textsf{bit}}} \gdef\bits{{\textsf{bits}}} \gdef\line#1{{{\small \rm \rlap{#1.}\hphantom{10.}} \ \ }} \gdef\while{{\textbf{while }}} \gdef\State{{\text{State}}} \gdef\pluseq{\mathrel{+}=} \gdef\return{{\textbf{return }}} \gdef\else{{\textbf{else}}} \gdef\len{{\textbf{len}}} \gdef\preimage{{\textsf{preimage}}} \gdef\DomainTag{{\text{DomainTag}}} \gdef\Arity{{\text{Arity}}} \gdef\HashType{{\text{HashType}}} \gdef\MerkleTree{{\text{MerkleTree}}} \gdef\ConstInputLen{{\text{ConstInputLen}}} \gdef\Mds{{\mathcal{M}}} \gdef\la{{\langle}} \gdef\ra{{\rangle}} \gdef\xb{{\textbf{x}}} \gdef\yb{{\textbf{y}}} \gdef\dotdot{{{\ldotp}{\ldotp}}} \gdef\do{{\textbf{do }}} \gdef\timesb{{\textbf{ times}}} \gdef\RC{{\text{RoundConstants}}} \gdef\Padding{{\text{Padding}}} \gdef\push{{\textbf{.push}}} \gdef\dotprod{{{\boldsymbol\cdot} \hso}} \gdef\row{{\textsf{row}}} \gdef\acc{{\textsf{acc}}} \gdef\push{{\textbf{.push}}} \gdef\extend{{\textbf{.extend}}} \gdef\reverse{{\textbf{reverse}}} \gdef\MdsInv{{\mathcal{M}^{\text{-} 1}}} \gdef\Padding{{\text{Padding}}} \gdef\Pre{{\mathcal{P}}} \gdef\Sparse{{\mathcal{S}}} \gdef\wb{{\textbf{w}}} \gdef\AtimesB{{(A \times B)}} $$

Poseidon

General Notation

$x \typecolon \mathbb{T}$
A variable $x$ having type $\mathbb{T}$.

$v \typecolon \mathbb{T}^{[n]}$
An array of $n$ elements each of type $\mathbb{T}$.

$A \typecolon \mathbb{T}^{[n \times n]}$
An $n {\times} n$ matrix having elements of type $\mathbb{T}$.

$\mathcal{I}_n \typecolon \mathbb{T}^{[n \times n]}$
The $n {\times} n$ identity matrix.

$v \typecolon {\mathbb{T}^{[n \times n]}}^{[m]}$
An array of $m$ matrices.

$b \typecolon \Bit$
A bit $b$.

$\bits \typecolon \Bit^{[n]}$
An array $\bits$ of $n$ bits.

$x \typecolon \Zp$
A prime field element $x$.

$x \in \mathbb{Z}_{n}$
An integer $x$ in the range $[0, n)$.

$x \typecolon \mathbb{Z}_{\geq 0}$
A non-negative integer.

$x \typecolon \mathbb{Z}_{>0}$
A positive integer.

$[n]$
The range of integers $0, \dots, n - 1 \hso$. Note that $[n] = [0, n)$.

$[a, b)$
The range of integers $a, \dots, b - 1 \hso$.

Array Operations

Note: all arrays and matrices are indexed starting at zero. An array of $n$ elements has indices $0, \dots, n - 1$.

$v[i]$
Returns the $i^{th}$ element of array $v$. When the notation $v[i]$ is cumbersome $v_i$ is used instead.

$v[i \dotdot j]$
Returns a slice of the array $v$: $[v[i], \dots, v[j {-} 1]]$.

$v \parallel w$
Concatenates two arrays $v$ and $w$, of types $\mathbb{T}^{[m]}$ and $\mathbb{T}^{[n]}$ respectively, producing an array of type $\mathbb{T}^{[m + n]} \hso$. The above is equivalent to writing $[v_0, \dots, v_{m - 1}, w_0, \dots, w_{n - 1}]$.

$[f(\dots)]_{i \in \lbrace 1, 2, 3 \rbrace}$
Creates an array using list comprehension; each element of the array is the output of the expression $f(\dots)$ at each element of the input sequence (e.g. $i \in \lbrace 1, 2, 3 \rbrace$).

$v \mathbin{\vec\oplus} w$
Element-wise field addition of two equal-length vectors of field elements $v, w \typecolon \Zp^{[n]}$. The above is equivalent to writing $[v_0 \oplus w_0, \dots, v_{n - 1} \oplus w_{n - 1}]$.

$\reverse(v)$
Reverses the elements of a vector $v$, i.e. the first element of $\reverse(v)$ is the last element of $v$.

Matrix Operations

$A_{i, j}$
Returns the value of matrix $A$ at row $i$ column $j$.

$A_{i, \ast}$
Returns the $i^{th}$ row of matrix $A$.

$A_{\ast, j}$
Returns the $j^{th}$ column of matrix $A$.

$A_{1 \dotdot, 1 \dotdot} = \begin{bmatrix} A_{1, 1} & \dots & A_{1, c - 1} \\ \vdots & \ddots & \vdots \\ A_{r - 1, 1} & \dots & A_{r - 1, c - 1} \end{bmatrix}$
Returns a submatrix of $A$ which excludes $A$’s first row and first column (here $A$ is an $r {\times} c$ matrix).

$A \times B$
Matrix-matrix multiplication of two matrices of field elements $A \typecolon \Zp^{[m \times n]}$ and $B \typecolon \Zp^{[n \times k]}$ which produces a matrix of type $\Zp^{[m \times k]}$. Note that $\AtimesB_{i, j} = A_{i, \ast} \boldsymbol\cdot B_{\ast, j}$ where the dot product uses field multiplication.

$A^{\neg 1}$
The inverse of a square $n {\times} n$ matrix, i.e. returns the matrix such that $A \times A^{\neg 1} = \mathcal{I}_n$.

$\textbf{v} \times A = [\textbf{v}_0, \dots, \textbf{v}_{m - 1}] \begin{bmatrix} A_{0, 0} & \dots & A_{0, n - 1} \\ \vdots & \ddots & \vdots \\ A_{m - 1, 0} & \dots & A_{m - 1, n - 1} \end{bmatrix} = [\textbf{v} \boldsymbol\cdot A_{\ast, i}]_{i \in [n]}$
Vector-matrix multiplication of a row vector of field elements $\textbf{v} \typecolon \Zp^{[m]}$ with a matrix of field elements $A \typecolon \Zp^{[m \times n]}$; note that $\len(\textbf{v}) = \textbf{rows}(A)$ and $\len(\textbf{v} \times A) = \textbf{columns}(A)$. The product is a row vector of length $n$ (the number of matrix columns). The $i^{th}$ element of the product vector is the dot product of $\textbf{v}$ and the $i^{th}$ column of $A$. Note that dot products use field multiplication.

$A \times \textbf{v} = \begin{bmatrix} A_{0, 0} & \dots & A_{0, n - 1} \\ \vdots & \ddots & \vdots \\ A_{m - 1, 0} & \dots & A_{m - 1, n - 1} \end{bmatrix} \begin{bmatrix} \textbf{v}_0 \\ \vdots \\ \textbf{v}_{n - 1} \end{bmatrix} = \begin{bmatrix} A_{0, \ast} \boldsymbol\cdot \textbf{v} \\ \vdots \\ A_{m - 1, \ast} \boldsymbol\cdot \textbf{v} \end{bmatrix}$
Matrix-vector multiplication of a matrix $A \typecolon \Zp^{[m \times n]}$ and a column vector $\textbf{v} \typecolon \Zp^{[n \times 1]}$; note that $\textbf{rows}(\textbf{v}) = \textbf{columns}(A)$ and $\textbf{rows}(A \times \textbf{v}) = \textbf{rows}(A)$. The product is a column vector whose length is equal to the number of rows of $A$. The $i^{th}$ element of the product vector is the dot product of the $i^{th}$ row of $A$ with $\textbf{v}$. Note that dot products use field multiplication.

Note: $\textbf{v} \times A = (A \times \textbf{v})^T$ when $A$ is symmetric $A = A^T$ (the $i^{th}$ row of $A$ equals the $i^{th}$ column of $A$), i.e. the row vector-matrix product and matrix-column vector product contain the same elements when $A$ is symmetric.

Field Arithmetic

$a \oplus b$
Addition in $\Zp$ of two field elements $a$ and $b$.

$x^\alpha$
Exponentiation in $\Zp$ of a field element $x$ to an integer power $\alpha$.

Bitwise Operations

$\oplus_\text{xor}$
Bitwise XOR.

$\bigoplus_{\text{xor} \ i \in \lbrace 1, 2, 3 \rbrace} \hso i$
XOR’s all values of a sequence. The above is equivalent to writing $1 \oplus_\text{xor} 2 \oplus_\text{xor} 3$.

Bitstrings

$[1, 0, 0] = 100_2$
A bit array can be written as an array $[1, 0, 0]$ or as a bitstring $100_2$. The leftmost bit of the bitstring corresponds to the first bit in the array.

Binary-Integer Conversions

Note: the leftmost digit of an integer $x \typecolon \mathbb{Z}_{\geq 0}$ is the most significant, thus a right-shift by $n$ bits is defined: $x \gg n = x / 2^n$.

$x \as \Bit_\msb^{[n]}$
Converts an integer $x \typecolon \mathbb{Z}_{\geq 0}$ into its $n$-bit binary representation. The most-significant bit ($\msb$) is first (leftmost) in the produced bit array $\Bit^{[n]}$. The above is equivalent to writing $\reverse([(x \gg i) \mathbin\wedge 1]_{i \in [n]})$. For example, $6 \as \Bit_\msb^{[3]} = [1, 1, 0]$.

$\bits_\msb \as \mathbb{Z}_{\geq 0}$
Converts a bit array $\bits \typecolon \Bit^{[n]}$ into an unsigned (non-negative) integer where the first bit in $\bits$ is the most significant ($\msb$). The above is equivalent to writing $\sum_{i \in [n]} 2^i \cdot \reverse(\bits)[i]$.

Poseidon-Specific Symbols

$p \typecolon \mathbb{Z}_{> 0}$
The prime field modulus.

$M \in \lbrace 80, 128, 256 \rbrace$
The security level measured in bits. Poseidon allows for 80, 128, and 256-bit security levels.

$t \typecolon \mathbb{Z}_{> 0} = \len(\preimage) + \len(\text{digest}) = \len(\preimage) + \left\lceil {2M \over \log_2(p)} \right\rceil$
The width of a Poseidon instance; the length in field elements of an instance’s internal $\state$ array. The width $t$ is equal to the preimage length plus the output length, where the output length is the number of field elements $\left\lceil {2M \over \log_2(p)} \right\rceil$ required to achieve the targeted security level $M$ in a field of bit-size $\log_2(p)$. Stated another way, each field element in a Poseidon digest provides an additional ${\log_2(p) \over 2}$ bits of security.

$(p, M, t)$
A Poseidon instance. Each instance is fully specified using this parameter triple.

$\alpha \in \lbrace \neg 1, 3, 5 \rbrace$
The S-box function’s exponent $S(x) = x^\alpha$, where $\gcd(\alpha, p - 1) = 1$. Poseidon allows for exponents -1, 3, and 5.

$R_F \typecolon \mathbb{Z}_{> 0}$
The number of full rounds. $R_F$ is even.

$R_P \typecolon \mathbb{Z}_{> 0}$
The number of partial rounds.

$R = R_F + R_P$
The total number of rounds

$R_f = R_F / 2$
Half the number of full rounds.

$r \in [R]$
The index of a round.

$r \in [R_f]$
The round index for a first-half full round.

$r \in [R_f + R_P, R)$
The round index for a second-half full round.

$r \in [R_f, R_f + R_P)$
The round index for a partial round.

$\state \typecolon \Zp^{[t]}$
A Poseidon instance’s internal state array of $t$ field elements which are transformed in each round.

$\RC \typecolon \Zp^{[Rt]}$
The round constants for an unoptimized Poseidon instance.

$\RC_r \typecolon \Zp^{[t]}$
The round constants for round $r \in [R]$ for an unoptimized Poseidon instance, that are added to $\state$ before round $r$’s S-boxes.

$\RC' \typecolon \Zp^{[tR_F + R_P]} = \\ \quad \RC_\text{pre}' \\ \quad \parallel \ \RC_1' \\ \quad \parallel \ \ldots \\ \quad \parallel \ \RC_{R - 2}'$
The round constants for an optimized Poseidon instance. There are no constants associated with the last full round $r = R - 1$.

$\RC'_\text{pre} \typecolon \Zp^{[t]}$
The round constants that are added to Poseidon’s $\state$ array before the S-boxes in the first round $r = 0$ of an optimized Poseidon instance.

$\RC'_r \typecolon \begin{cases} \Zp^{[1]} & \if r \in [R_f, R_f + R_P), \text{i.e. } r \text{ is a partial round } \cr \Zp^{[t]} & \if r \in [R_f] \text{ or } r \in [R_f + R_P, R - 1), \text{i.e. } r \text{ is a full round, excluding the last round} \end{cases}$
The round constants that are added to Poseidon’s $\state$ array after the S-boxes in round $r$ in an optimized Poseidon instance. Partial rounds have a single round constant, full rounds (excluding the last) have $t$ constants. The last full round has no round constants.

$\Mds \typecolon \Zp^{[t \times t]}$
The MDS matrix for a Poseidon instance.

$\Pre \typecolon \Zp^{[t \times t]}$
The pre-sparse matrix used in MDS mixing for the last first-half full round ($r = R_f - 1$) of an optimized Poseidon instance.

$\Sparse \typecolon {\Zp^{[t \times t]}}^{[R_P]}$
An array of sparse matrices used in MDS mixing for the partial rounds $r \in [R_f, R_f + R_P)$ of the optimized Poseidon algorithm.

Poseidon Instantiation

The parameter triple $(p, M, t)$ fully specifies a unique instance of Poseidon (a hash function that uses the same constants and parameters and performs the same operations). All other Poseidon parameters and constants are derived from the instantiation parameters.

The S-box exponent $\alpha$ is derived from the field modulus $p$ such that $\alpha \in \lbrace 3, 5 \rbrace$ and $\gcd(\alpha, p - 1) = 1$.

The round numbers $R_F$ and $R_P$ are derived from the field size and security level $(\lceil \log_2(p) \rceil, M)$.

The $\RC$ are derived from $(p, M, t)$.

The MDS matrix $\Mds$ is derived from the width $t$.

The allowed preimage sizes are $\len(\preimage) \in [1, t)$.

The total number of operations executed per hash is determined by the width and number of rounds $(t, R_F, R_P)$.

Filecoin’s Poseidon Instances

Note: the following are the Poseidon instantiation parameters used within Filecoin and do not represent all possible Poseidon instances.

$p = 52435875175126190479447740508185965837690552500527637822603658699938581184513$
$\hphantom{p} = \text{0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001}$
The Poseidon prime field modulus in base-10 and base-16. Filecoin uses BLS12-381’s scalar field as the Poseidon prime field $\Zp$, i.e. $p$ is the order of BLS12-381’s prime order subgroup $\mathbb{G}_1$. The bit-length of $p$ is $\lceil \log_2(p) \rceil = 255 \approx 256$ bits.

$M = 128 \ \text{Bits}$
Filecoin targets the 128-bit security level.

$t \in \lbrace 3, 5, 9, 12 \rbrace = \lbrace \text{arity} + 1 \mid \text{arity} \in \lbrace 2, 4, 8, 11 \rbrace \rbrace$
The size in field elements of Poseidon’s internal state; equal to the preimage length (a Filecoin Merkle tree arity) plus 1 for the digest length (the number of field elements required to target the $M = 128$-bit security level via a 256-bit prime $p$). Filecoin’s Poseidon instances take preimages of varying lengths (2, 4, 8, and 11 field elements) and always return one field element.

  • $t = 3$ is used to hash 2:1 Merkle trees (BinTrees) and to derive SDR-PoRep’s $\text{CommR}$
  • $t = 5$ is used to hash 4:1 Merkle trees (QuadTrees)
  • $t = 9$ is used to hash 8:1 Merkle trees (OctTrees)
  • $t = 12$ is used to hash SDR-PoRep columns of 11 field elements.

$\alpha = 5$
The S-box function’s $S(x) = x^\alpha$ exponent. It is required that $\alpha$ is relatively prime to $p - 1$, which is true for Filecoin’s choice of $p$ and $\alpha = 5$.

Round Numbers

The Poseidon round numbers are the number of full and partial rounds $(R_F, R_P)$ for a Poseidon instance $(p, M, t)$. The round numbers are chosen such that they minimize the total number of S-boxes:

$$ \text{Number of S-boxes} = tR_F + R_P $$

while providing security against known attacks (statistical, interpolation, and Gröbner basis).


$\constb R_F, R_P = \texttt{calc\_round\_numbers}(p, M, t, c_{\alpha})$
where the S-box case $c_{\alpha}$ is given by $c_{\alpha} = \begin{cases} 0 & \if \alpha = 3 \cr 1 & \if \alpha = 5 \cr 2 & \if \alpha = \neg 1 \end{cases}$
The number of full and partial rounds; both are positive integers $R_F, R_P \typecolon \mathbb{Z}_{>0}$ and $R_F$ is even.

$R_F$ and $R_P$ are calculated using either the Python script calc_round_numbers.py or the neptune Rust library, denoted $\texttt{calc\_round\_numbers}$. Both methods calculate the round numbers via brute-force; by iterating over all reasonable values for $R_F$ and $R_P$ and choosing the pair that satisfies the security inequalities (provided below) while minimizing the number of S-boxes.

Security Inequalities

The round numbers $R_F$ and $R_P$ are chosen such that they satisfy the following inequalities. The symbol $\Longrightarrow$ is used to indicate that an inequality simplifies when Filecoin’s Poseidon parameters $M = 128$ and $\alpha = 5$ are plugged in.

$(1) \quad 2^M \leq p^t$
Equivalent to writing $M \leq t\log_2(p)$ (Appendix C.1.1 in the Poseidon paper). This is always satisfied for Filecoin’s choice of $p$ and $M$.

$(2) \quad R_F \geq 6$
The minimum $R_F$ necessary to prevent statistical attacks (Eq. 2 Section 5.5.1 in the Poseidon paper where $\lfloor \log_2(p) \rfloor - 2 = 252$ and $\mathcal{C} = 2$ for $\alpha = 5$).

$(3) \quad R > \lceil M \log_\alpha(2) \rceil + \lceil \log_\alpha(t) \rceil \ \ \Longrightarrow \ \ R > \begin{cases} 57 & \if t \in [2, 5] \cr 58 & \if t \in [6, 25] \end{cases}$
The minimum number of total rounds necessary to prevent interpolation attacks (Eq. 3 Section 5.5.2 of the Poseidon paper).

$(4 \text{a}) \quad R > {M \log_\alpha(2) \over 3} \ \ \Longrightarrow \ \ R > 18.3$
$(4 \text{b}) \quad R > t - 1 + {M \log_\alpha(2) \over t + 1}$
The minimum number of total rounds required to prevent against Gaussian elimination attacks (both equations must be satisfied, Eq. 5 from Section 5.5.2 of the Poseidon paper).

Round Constants

Note: this section gives the round constants for only the unoptimized Poseidon algorithm.

Round Constants

$\constb \RC \typecolon \Zp^{[Rt]}$
For each Poseidon instance $(p, M, t)$ an array $\RC$ containing $Rt$ field elements is generated ($t$ field elements per round $r \in [R]$) using the Grain-LFSR stream cipher whose 80-bit state is initialized to $\text{GrainState}_\init$, an encoding of the Poseidon instance.

$\constb \RC_r \typecolon \Zp^{[t]} = \RC[rt \dotdot (r + 1)t]$
The round constants for round $r \in [R]$ for an unoptimized Poseidon instance.

$\constb \FieldBits \typecolon \Bit^{[2]}_\msb = \begin{cases} 0 & \text{if using a binary field } \Zbin \\ 1 & \text{if using a prime field } \Zp \end{cases} = 01_2$
Specifies the field type as prime or binary. Filecoin always uses a prime field $\Zp$, however Poseidon also can be instantiated using a binary field $\Zbin$.

$\constb \SboxBits \typecolon \Bit^{[4]}_\msb = \begin{cases} 0 & \if \alpha = 3 \cr 1 & \if \alpha = 5 \cr 2 & \if \alpha = \neg 1 \end{cases} = 0001_2$
Specifies the S-box exponent $\alpha$. Filecoin uses $\alpha = 5$.

$\constb \text{FieldSizeBits} \typecolon \Bit^{[12]}_\msb = \lceil \log_2(p) \rceil = 255 = 000011111111_2$
The bit-length of the field modulus.

$\constb \text{GrainState}_\text{init} \typecolon \Bit^{[80]} = \\ \quad \text{FieldBits} \\ \quad \Vert \ \SboxBits \\ \quad \Vert \ \text{FieldSizeBits} \\ \quad \Vert \ t \as \Bit^{[12]}_\msb \\ \quad \Vert \ R_F \as \Bit^{[10]}_\msb \\ \quad \Vert \ R_P \as \Bit^{[10]}_\msb \\ \quad \Vert \ 1^{[30]}$
Initializes the Grain-LFSR stream cipher which is used to derive $\RC$ for a Poseidon instance $(p, M, t)$.

$\overline{\underline{\textbf{Algorithm: } \RC}} \\ \line{1} \state \typecolon \Bit^{[80]} = \text{GrainState}_\init \\ \line{2} \textbf{do } 160 \timesb \bcolon \\ \line{3} \quad \bit \typecolon \Bit = \bigoplus_{\text{xor} \ i \hso \in \lbrace 0, 13, 23, 38, 51, 62 \rbrace} \thin \state[i] \\ \line{4} \quad \state = \state[1 \dotdot] \parallel \bit \\ \line{5} \RC \typecolon \Zp^{[Rt]} = [\ ] \\ \line{6} \while \len(\RC) < Rt \bcolon \\ \line{7} \quad \bits \typecolon \Bit^{[255]} = [\ ] \\ \line{8} \quad \while \len(\bits) < 255 \bcolon \\ \line{9} \quad\quad \bit_1 = \bigoplus_{\text{xor} \ i \hso \in \lbrace 0, 13, 23, 38, 51, 62 \rbrace} \thin \state[i] \\ \line{10} \quad\quad \state = \state[1 \dotdot] \parallel \bit_1 \\ \line{11} \quad\quad \bit_2 = \bigoplus_{\text{xor} \ i \hso \in \lbrace 0, 13, 23, 38, 51, 62 \rbrace} \thin \state[i] \\ \line{12} \quad\quad \state = \state[1 \dotdot] \parallel \bit_2 \\ \line{13} \quad\quad \ifb \bit_1 = 1 \bcolon \\ \line{14} \quad\quad\quad \bits\push(\bit_2) \\ \line{15} \quad c = \bits_\msb \as \mathbb{Z}_{2^{255}} \\ \line{16} \quad \if c \in \Zp \bcolon \\ \line{17} \quad\quad \RC\push(c) \\ \line{18} \return \RC$

MDS Matrix

$\constb \xb \typecolon \Zp^{[t]} = [0, \dots, t - 1] \\ \constb \yb \typecolon \Zp^{[t]} = [t, \dots, 2t - 1] \\~ \\ \constb \Mds \typecolon \Zp^{[t \times t]} = \begin{bmatrix} (\xb_0 + \yb_0)^{\neg 1} & \dots & (\xb_0 + \yb_{t - 1})^{\neg 1} \\ \vdots & \ddots & \vdots \\ (\xb_{t - 1} + \yb_0)^{\neg 1} & \dots & (\xb_{t - 1} + \yb_{t - 1})^{\neg 1} \end{bmatrix}$
The MDS matrix $\Mds$ for a Poseidon instance of width $t$. The superscript $^{\neg 1}$ denotes a multiplicative inverse $\text{mod } p$. The MDS matrix is invertible and symmetric.
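
As a worked instance of the construction above (for illustration only), for $t = 3$ we have $\xb = [0, 1, 2]$ and $\yb = [3, 4, 5]$, so

$$ \Mds = \begin{bmatrix} 3^{\neg 1} & 4^{\neg 1} & 5^{\neg 1} \\ 4^{\neg 1} & 5^{\neg 1} & 6^{\neg 1} \\ 5^{\neg 1} & 6^{\neg 1} & 7^{\neg 1} \end{bmatrix} $$

where each entry is a multiplicative inverse $\text{mod } p$.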

Domain Separation

Every preimage hashed is associated with a hash type $\HashType$ which encodes the Poseidon application. Note that $\HashType$ is specified per preimage and does not specify a Poseidon instance.

Filecoin uses two hash types $\MerkleTree$ and $\ConstInputLen$ to designate a preimage as being for a Merkle tree of arity $t - 1$ or being for no specific application, but having a length $\len(\preimage) < t$.

The $\HashType$ determines the $\DomainTag$ and $\Padding$ used for a preimage, which give the first element of Poseidon’s initial state:

$$ \state = \DomainTag \parallel \preimage \parallel \Padding \quad. $$


$\constb \HashType \in \lbrace \MerkleTree, \ConstInputLen \rbrace$
The allowed hash types in which to hash a preimage for a Poseidon instance $(p, M, t)$. It is required that $1 \leq \len(\preimage) < t \hso$.

  • A $\HashType$ of $\MerkleTree$ designates a preimage as being the preimage of a Merkle tree hash function, where the tree is $(t {-} 1) \hso {:} \hso 1$ (i.e. $\text{arity} = \len(\preimage)$ number of nodes are hashed into $1$ node).
  • A $\HashType$ of $\ConstInputLen$ designates Poseidon as being used to hash preimages of length exactly $\len(\preimage)$ into a single output element (where $1 \leq \len(\preimage) < t$).

$\constb \DomainTag \typecolon \Zp = \begin{cases} 2^\text{arity} - 1 & \if \HashType = \MerkleTree \cr 2^{64} * \len(\preimage) & \if \HashType = \ConstInputLen \end{cases}$
Encodes the Poseidon application within the first Poseidon initial state element $\state[0]$ for a preimage.


$\constb \Padding \typecolon \Zp^{[*]} = \begin{cases} [\ ] & \if \HashType = \MerkleTree \cr 0^{[t - 1 - \len(\preimage)]} & \if \HashType = \ConstInputLen \end{cases}$
The padding that is applied to Poseidon’s initial state. A $\HashType$ of $\MerkleTree$ results in no applied padding; a $\HashType$ of $\ConstInputLen$ pads the last $t - 1 - \len(\preimage)$ elements of Poseidon’s initial $\state$ to zero.
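
As a worked example of the two cases above (illustrative only), an 8:1 Merkle tree hash ($t = 9$) and a constant-input-length hash of 3 elements at the same width give:

$$ \HashType = \MerkleTree, \ t = 9: \quad \DomainTag = 2^8 - 1 = 255, \quad \Padding = [\ ] $$

$$ \HashType = \ConstInputLen, \ t = 9, \ \len(\preimage) = 3: \quad \DomainTag = 2^{64} \cdot 3, \quad \Padding = 0^{[5]} $$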

Poseidon Hash Function

The Poseidon hash function takes a preimage of $t - 1$ prime field $\Zp$ elements to a single field element. Poseidon operates on an internal state $\state$ of $t$ field elements which, in the unoptimized algorithm, are transformed over $R$ rounds of round constant addition, S-boxes, and MDS matrix mixing. Once all rounds have been performed, Poseidon outputs the second element of the state.

A Poseidon hash function is instantiated by a parameter triple $(p, M, t)$ which sets the prime field, the security level, and the size of Poseidon’s internal state buffer $\state$. From $(p, M, t)$ the remaining Poseidon parameters $(\alpha, R_F, R_P, \RC, \Mds)$ are computed, i.e. the S-box exponent, the round numbers, the round constants, and the MDS matrix.

The S-box function is defined as:

$$ S: \Zp \rightarrow \Zp \newline S(x) = x^\alpha $$

The $\state$ is initialized to the concatenation of the $\DomainTag$, $\preimage$, and $\Padding$:

$$ \state = \DomainTag \parallel \preimage \parallel \Padding \quad. $$

Every round $r \in [R]$ begins with $t$ field additions of the $\state$ with that round’s constants $\RC_r \hso$:

$$ \state = \state \mathbin{\vec\oplus} \RoundConstants_r \quad. $$

If $r$ is a full round, i.e. $r < R_f$ or $r \geq R_f + R_P$, the S-box function is applied to each element of $\state$:

$$ \state = [\state[i]^\alpha]_{i \in [t]} $$

otherwise, if $r$ is a partial round $r \in [R_f, R_f + R_P)$, the S-box function is applied to the first $\state$ element exclusively:

$$ \state[0] = \state[0]^\alpha \quad. $$

Once the S-boxes have been applied for a round, the $\state$ is transformed via vector-matrix multiplication with the MDS matrix $\Mds$:

$$\state = \state \times \Mds \quad.$$

After $R$ rounds of the above procedure, Poseidon outputs the digest $\state[1]$.

$\overline{\underline{\Function \textsf{poseidon}(\preimage \typecolon \Zp^{[t - 1]}) \rightarrow \Zp}} \\ \line{1} \state \typecolon \Zp^{[t]} = \DomainTag \parallel \preimage \parallel \Padding \\ \line{2} \for r \in [R] \bcolon \\ \line{3} \quad \state = \state \mathbin{\vec\oplus} \RC_r \\ \line{4} \quad \ifb r \in [R_f] \textbf{ or } r \in [R_f + R_P, R) \bcolon \\ \line{5} \quad\quad \state = [\state[i]^\alpha]_{i \in [t]} \\ \line{6} \quad \else \\ \line{7} \quad\quad \state[0] = \state[0]^\alpha \\ \line{8} \quad \state = \state \times \Mds \\ \line{9} \return \state[1]$
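
The following Go sketch mirrors the unoptimized algorithm above using math/big; the function name and parameter representation are assumptions made for illustration, and the caller is expected to supply the instance parameters (p, alpha, R_F, R_P, RoundConstants, MDS matrix) derived as described earlier, along with the initial state built from the domain tag, preimage, and padding:

package poseidon

import "math/big"

// hashUnoptimized applies R = rF + rP rounds of round constant addition,
// S-boxes and MDS mixing to the initial state and returns state[1].
// rc[r] holds the t round constants for round r; mds is the t x t MDS matrix.
func hashUnoptimized(state []*big.Int, p, alpha *big.Int, rF, rP int, rc, mds [][]*big.Int) *big.Int {
	t := len(state)
	rf := rF / 2
	for r := 0; r < rF+rP; r++ {
		// Add the round constants to every state element.
		for i := 0; i < t; i++ {
			state[i] = new(big.Int).Mod(new(big.Int).Add(state[i], rc[r][i]), p)
		}
		// Full rounds apply the S-box to every element; partial rounds only
		// to the first element.
		if r < rf || r >= rf+rP {
			for i := 0; i < t; i++ {
				state[i] = new(big.Int).Exp(state[i], alpha, p)
			}
		} else {
			state[0] = new(big.Int).Exp(state[0], alpha, p)
		}
		// MDS mixing: state = state x Mds (row vector times matrix).
		mixed := make([]*big.Int, t)
		for j := 0; j < t; j++ {
			acc := new(big.Int)
			for i := 0; i < t; i++ {
				acc.Add(acc, new(big.Int).Mul(state[i], mds[i][j]))
			}
			mixed[j] = acc.Mod(acc, p)
		}
		state = mixed
	}
	return state[1]
}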

Poseidon Algorithm

Optimizations

Filecoin’s Rust library neptune implements the Poseidon hash function. The library differentiates between the unoptimized and optimized Poseidon algorithms using the terms correct and static respectively.

The primary differences between the two versions are:

  • the unoptimized algorithm uses the round constants $\RC$, performs round constant addition before the S-boxes, and uses the MDS matrix $\Mds$ for mixing
  • the optimized algorithm uses the transformed round constants $\RC'$ (containing fewer constants than $\RC$), performs a round constant addition before the first round’s S-box, performs round constant addition after every S-box other than the last round’s, and uses multiple matrices for MDS mixing: $\Mds$, $\Pre$, and $\Sparse$. This change in MDS mixing from a non-sparse matrix $\Mds$ to sparse matrices $\Sparse$ greatly reduces the number of multiplications in each round.

For a given Poseidon instance $(p, M, t)$ the optimized and unoptimized algorithms will produce the same output when provided with the same input.

Optimized Round Constants

Given the round constants $\RC$ and MDS matrix $\Mds$ for a Poseidon instance, we are able to derive the round constants $\RC'$ for the corresponding optimized Poseidon algorithm.

Optimized Round Constants

$\constb \RC' \typecolon \Zp^{[tR_F + R_P]}$
The round constants for a Poseidon instance’s $(p, M, t)$ optimized hashing algorithm. Each full round is associated with $t$ round constants, while each partial round is associated with one constant.

$\overline{\underline{\textbf{Algorithm: } \RC'}} \\ \line{1} \RC' \typecolon \Zp^{[tR_F + R_P]} = [\ ] \\ \line{2} \RC'\extend(\RC_0) \\ \line{3} \for r \in [1, R_f) \bcolon \\ \line{4} \quad \RC'\extend(\RC_r \hso {\times} \hso \MdsInv) \\ \line{5} \textsf{partial\_consts} \typecolon \Zp^{[R_P]} = [\ ] \\ \line{6} \acc \typecolon \Zp^{[t]} = \RC_{R_f + R_P} \\ \line{7} \for r \in \reverse([R_f, R_f + R_P)) \bcolon \\ \line{8} \quad \acc' = \acc \hso {\times} \hso \MdsInv \\ \line{9} \quad \textsf{partial\_consts}\push(\acc'[0]) \\ \line{10} \quad \acc'[0] = 0 \\ \line{11} \quad \acc = \acc' \mathbin{\vec\oplus} \RC_r \\ \line{12} \RC'\extend(\acc \hso {\times} \hso \MdsInv) \\ \line{13} \RC'\extend(\reverse(\textsf{partial\_consts})) \\ \line{14} \for r \in [R_f + R_P + 1, R) \\ \line{15} \quad \RC'\extend(\RC_r \hso {\times} \hso \MdsInv) \\ \line{16} \return \RC'$

Algorithm Comments:
Note: $\times$ denotes a row vector-matrix multiplication which outputs a row vector.
Line 2. The first $t$ round constants are unchanged. Note that both $\RC_0'$ and $\RC_1'$ are used in the first optimized round $r = 0$.
Lines 3-4. For each first-half full round, transform the round constants into $\RC_r \hso {\times} \hso \MdsInv$.
Line 5. Create a variable to store the round constants for the partial rounds $\textsf{partial\_consts}$ (in reverse order).
Line 6. Create and initialize a variable $\acc$ that is transformed and added to $\RC_r$ in each $\textbf{do}$ loop iteration.
Lines 7-11. For each partial round $r$ (starting from the greatest partial round index $R_f + R_P - 1$ and proceeding to the least $R_f$) transform $\acc$ into $\acc \hso {\times} \hso \MdsInv$, take its first element as a partial round constant, then perform element-wise addition with $\RC_r$. The value of $\acc$ at the end of the $i^{th}$ loop iteration is:

$$ \acc_i = \RC_r[0] \parallel ((\acc_{i - 1} \hso {\times} \hso \MdsInv)[1 \dotdot] \mathbin{\vec\oplus} \RC_{r}[1 \dotdot]) $$

Line 12. Set the last first-half full round’s constants using the final value of $\acc$.
Line 13. Set the partial round constants.
Lines 14-15. Set the remaining full round constants.


$\constb \RC'_\text{pre} \typecolon \Zp^{[t]} = \RC'[\dotdot t]$
The first $t$ constants in $\RC'$ are added to $\state$ prior to applying the S-box in the first round $r = 0$.

$\constb \RC_r' \typecolon \Zp^{[\ast]} = \begin{cases} \RC'[(r + 1)t \dotdot (r + 2)t] & \if \hso r \in [R_f] \cr \RC'[(R_f + 1)t + r_P] & \if r \in [R_f, R_f + R_P), \text{where } r_P = r - R_f \text{ is the partial round index } r_P \in [R_P] \cr \RC'[(r_F + 1)t + R_P \hso \dotdot \hso (r_F + 2)t + R_P] & \if r \in [R_f + R_P, R - 1), \text{where } r_F = r - R_P \text{ is the full round index } r_F \in [R_F - 1] \text{ (excludes the last full round)} \end{cases}$
For each round excluding the last $r \in [R - 1]$, $\RC_r'$ is added to the Poseidon $\state$ after that round’s S-box has been applied.

Sparse MDS Matrices

A sparse matrix $A$ is a square $n {\times} n$ matrix whose first row and first column are utilized, and whose remaining entries form the $(n {-} 1) {\times} (n {-} 1)$ identity matrix $\mathcal{I}_{n - 1}$:

$$ A = \left[ \begin{array}{c|c} A_{0, 0} & A_{0, 1 \dotdot} \cr \hline A_{1 \dotdot, 0} & \mathcal{I}_{n - 1} \cr \end{array} \right] = \begin{bmatrix} A_{0, 0} & \dots & \dots & A_{0, n - 1} \cr \vdots & 1 & \dots & 0 \cr \vdots & \vdots & \ddots & \vdots \cr A_{n - 1, 0} & 0 & \dots & 1 \cr \end{bmatrix} $$

The MDS matrix $\Mds$ is factored into a non-sparse matrix $\Pre$ and an array of sparse matrices $\Sparse = [\Sparse_0, \dots, \Sparse_{R_P - 1}]$ (one matrix per partial round). $\Pre$ is used in MDS mixing for the last first-half full round $r = R_f - 1$. Each matrix of $\Sparse$ is used in MDS mixing for a partial round. The first sparse matrix $\Sparse_0$ is used in the first partial round ($r = R_f$) and the last sparse matrix $\Sparse_{R_P - 1}$ is used in the last partial round ($r = R_f + R_P - 1$).

$\constb \Pre \typecolon \Zp^{[t \times t]}$
The pre-sparse matrix (a non-sparse matrix) used in MDS mixing for the last full round of the first-half $r = R_f - 1$. Like the MDS matrix $\Mds$, the pre-sparse matrix $\Pre$ is symmetric.

$\constb \Sparse \typecolon {\Zp^{[t \times t]}}^{[R_P]}$
The array of sparse matrices that $\Mds$ is factored into, which are used for MDS mixing in the optimized partial rounds.

$\overline{\underline{\textbf{Algorithm: } \Pre, \Sparse}} \\ \line{1} \textsf{sparse} \typecolon {\Zp^{[t \times t]}}^{[R_P]} = [\ ] \\ \line{2} m \typecolon \Zp^{[t \times t]} = \Mds \\ \line{3} \do R_P \timesb \bcolon \\ \line{4} \quad (m', m'') \typecolon (\Zp^{[t \times t]}, \Zp^{[t \times t]}) = \textsf{sparse\_factorize}(m) \\ \line{5} \quad \textsf{sparse}\push(m'') \\ \line{6} \quad m = \Mds \times m' \\ \line{7} \Pre = m \\ \line{8} \Sparse = \reverse(\textsf{sparse}) \\ \line{9} \return \Pre, \Sparse$

Algorithm Comments:
Line 1. An array containing the sparse matrices that $\Mds$ is factored into.
Line 2. A matrix $m$ that is repeatedly factored into a non-sparse matrix $m'$ and a sparse matrix $m''$, i.e. $m = m' \times m''$.
Lines 3-6. In each loop iteration we factor $m$ into $m'$ and $m''$. The first $\textbf{do}$ loop iteration calculates the sparse matrix $m''$ used in MDS mixing for the last partial round $r = R_f + R_P - 1$. The last $\textbf{do}$ loop iteration calculates the sparse matrix $m''$ used in MDS mixing for the first partial round $r = R_f$ (i.e. $\Sparse_0 = m''$).
Line 6. $\Mds \times m'$ is a matrix-matrix multiplication which produces a $t {\times} t$ matrix.


The function $\textsf{sparse\_factorize}$ factors a non-sparse matrix $m$ into a non-sparse matrix $m'$ and a sparse matrix $m''$ such that $m = m' \times m''$.

$\overline{\underline{\Function \textsf{sparse\_factorize}(m \typecolon \Zp^{[t {\times} t]}) \rightarrow (m' \typecolon \Zp^{[t {\times} t]} \hso, m'' \typecolon \Zp^{[t {\times} t]})}} \\ \line{1} \hat{m} \typecolon \Zp^{[t - 1 {\times} t - 1]} = m_{1 \dotdot, 1 \dotdot} = \begin{bmatrix} m_{1, 1} & \dots & m_{1, t - 1} \\ \vdots & \ddots & \vdots \\ m_{t - 1, 1} & \dots & m_{t - 1, t - 1} \end{bmatrix} \\~ \\ \line{2} m' \typecolon \Zp^{[t {\times} t]} = \left[ \begin{array}{c|c} 1 & 0 \\ \hline 0 & \hat{m} \end{array} \right] = \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & m_{1, 1} & \dots & m_{1, t - 1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & m_{t - 1, 1} & \dots & m_{t - 1, t - 1} \end{bmatrix} \\~ \\ \line{3} \wb \typecolon \Zp^{[t - 1 {\times} 1]} = \hat{m}_{\ast, 0} = \begin{bmatrix} \hat{m}_{0, 0} \\ \vdots \\ \hat{m}_{t - 2, 0} \end{bmatrix} = m_{1 \dotdot, 1} = \begin{bmatrix} m_{1, 1} \\ \vdots \\ m_{t - 1, 1} \end{bmatrix} \\~ \\ \line{4} \hat\wb \typecolon \Zp^{[t - 1 {\times} 1]} = \hat{m}^{\neg 1} {\times} \hso \wb = \begin{bmatrix} {\hat{m}^{\neg 1}}_{0, \ast} \mathbin{\boldsymbol\cdot} \wb \\ \vdots \\ {\hat{m}^{\neg 1}}_{t - 2, \ast} \mathbin{\boldsymbol\cdot} \wb \end{bmatrix} \\~ \\ \line{5} m'' \typecolon \Zp^{[t {\times} t]} = \left[ \begin{array}{c|c} m_{0, 0} & m_{0, 1 \dotdot} \\ \hline \hat{\textbf{w}} & \mathcal{I}_{t - 1} \end{array} \right] = \begin{bmatrix} m_{0, 0} & \dots & \dots & m_{0, t - 1} \\ \hat{\textbf{w}}_0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \hat{\textbf{w}}_{t - 2} & 0 & \dots & 1 \end{bmatrix} \\ \line{6} \return m', m''$

Algorithm Comments:
Line 1. $\hat{m}$ is a submatrix of $m$ which excludes $m$’s first row and first column.
Line 2. $m'$ is a copy of $m$ where $m$’s first row and first column have been replaced with $[1, 0, \dots, 0]$.
Line 3. $\wb$ is a column vector which is the first column of $\hat{m}$ or the second column of $m$ excluding the first row’s value.
Line 4. $\hat\wb$ is the matrix-column vector product of $\hat{m}^{\neg 1}$ and $\wb$.
Line 5. $m''$ is a sparse matrix whose first row is the first row of $m$, remaining first column is $\hat\wb$, and remaining entries are the identity matrix.

Optimized Poseidon

The optimized Poseidon hash function is instantiated in the same way as the unoptimized algorithm, however the optimized Poseidon algorithm requires the additional precomputation of round constants $\RC’$, a pre-sparse matrix $\Pre$, and sparse matrices $\Sparse$.

Prior to the first round $r = 0$, the $\state$ is initialized and added to the pre-first-round constants $\RC'_\text{pre}$:

$$ \state = \DomainTag \parallel \preimage \parallel \Padding \newline \state = \state \mathbin{\vec\oplus} \RC_\text{pre}' $$

For each full round of the first-half $r \in [R_f]$: the S-box function is applied to each element of $\state$, the output of each S-box is added to the associated round constant in $\RC_r'$, and MDS mixing occurs between $\state$ and the MDS matrix $\Mds$ (when $r < R_f - 1$) or the pre-sparse matrix $\Pre$ (when $r = R_f - 1$).

$$ \state = \begin{cases} \lbrack \state[i]^\alpha \oplus \RC_r' \lbrack i \rbrack \rbrack_{i \in [t]} \times \Mds & \text{if } r \in [R_f - 1] \cr \lbrack \state[i]^\alpha \oplus \RC_r' \lbrack i \rbrack \rbrack_{i \in [t]} \times \Pre & \text{if } r = R_f - 1 \end{cases} $$

For each partial round $r \in [R_f, R_f + R_P)$ the S-box function is applied to the first $\state$ element, the round constant is added to the first $\state$ element, and MDS mixing occurs between the $\state$ and the $i^{th}$ sparse matrix $\Sparse_i$ (the $i^{th}$ partial round $i \in [R_P]$ is associated with sparse matrix $\Sparse_i$ where $i = r - R_f$):

$$ \state[0] = \state[0]^\alpha \oplus \RC_r' \newline \state = \state \times \Sparse_{r - R_f} $$

The second half of full rounds $r \in [R_f + R_P, R)$ proceed in the same way as the first half of full rounds except that all MDS mixing uses the MDS matrix $\Mds$ and that the last round $r = R - 1$ does not add round constants into $\state$.

After performing $R$ rounds, Poseidon outputs the digest $\state[1]$.

$\overline{\underline{\Function \textsf{poseidon}(\preimage \typecolon \Zp^{[t - 1]}) \rightarrow \Zp}} \\ \line{1} \state \typecolon \Zp^{[t]} = \DomainTag \parallel \preimage \parallel \Padding \\ \line{2} \state = \state \mathbin{\vec\oplus} \RC'_\text{pre} \\ \line{3} \for r \in [R_f - 1] \bcolon \\ \line{4} \quad \state = [\state[i]^\alpha \oplus \RC'_r[i]]_{i \in [t]} \times \Mds \\ \line{5} \state = [\state[i]^\alpha \oplus \RC'_{R_f - 1}[i]]_{i \in [t]} \times \Pre \\ \line{6} \for r \in [R_f, R_f + R_P) \bcolon \\ \line{7} \quad \state[0] = \state[0]^\alpha \oplus \RC'_r \\ \line{8} \quad \state = \state \times \mathcal{S}_{r - R_f} \\ \line{9} \for r \in [R_f + R_P, R - 1) \bcolon \\ \line{10} \quad \state = [\state[i]^\alpha \oplus \RC'_r[i]]_{i \in [t]} \times \Mds \\ \line{11} \state = [\state[i]^\alpha]_{i \in [t]} \times \Mds \\ \line{12} \return \state[1]$

Algorithm Comments:
Line 1. Initialize the $\state$.
Line 2. Adds the pre- $r = 0$ round constants.
Lines 3-4. Performs all but the last first-half of full rounds $r \in [R_f - 1]$.
Line 5. Performs the last first-half full round $r = R_f - 1$.
Lines 6-8. Performs the partial rounds $r \in [R_f, R_f + R_P)$. Mixing in the $i^{th}$ partial round, where $i = r - R_f$, is done using the $i^{th}$ sparse matrix $\Sparse_{r - R_f}$.
Lines 9-10. Performs all but the last second-half full rounds $r \in [R_f + R_P, R - 1)$.
Line 11. Performs the last second-half full round $r = R - 1$.

Optimized Poseidon Algorithm

Verified Clients

As described earlier, verified clients as a construction make the Filecoin Economy more robust and valuable. While a storage miner may choose to forgo deal payments and self-deal to fill their storage and earn block rewards, this is not as valuable to the economy and should not be heavily subsidized. However, in practice, it is impossible to tell useful data apart from encrypted zeros. Introducing verified clients pragmatically solves this problem through social trust and validation. There will be a simple and open verification process to become a verified client; this process should attract clients who will bring real storage demand to the Filecoin Economy.

Verifiers should eventually form a decentralized, globally distributed network of entities that confirms the useful storage demand of verified clients. If a verifier evaluates and affirms a client’s demand to have real data stored, that client will be able to add up to a certain amount of data to the network as verified client deals; this limit is called a DataCap allocation. Verified clients can request an increased DataCap once they have used their full allocation, and Verifiers should perform some due diligence to ensure that the clients are not maliciously exploiting verification. The verification process will evolve over time to become more efficient, decentralized, and robust.

Storage demand on the network will shape the storage offering provided by miners. With the ability to deploy data with a greater sector quality multiplier, verified clients play an even more important role in shaping the quality of service, geographic distribution, degree of decentralization, and consensus security of the network. Verifiers and verified clients must be cognizant of the value and responsibility that come with their role. Additionally, it is conceivable for miners to have a business development team to source valuable and useful datasets in the world, growing demand for the storage they provide. Teams would be incentivized to help their clients through the verification process and start storing data on the Filecoin Network, in addition to providing their clients with strong SLAs.

Filecoin CryptoEconomics

The Filecoin network is a complex multi-agent economic system. The cryptoeconomics of Filecoin touch on most parts of the system; as such, related mechanisms and details appear in many places across this specification. This section aims to explain in more detail the mechanisms and parameters of the system that contribute to the overall network-level goals.

Next, we provide the parameters of the cryptoeconomic model. It is advised that the reader refer to the following sections, which are closely related to the Filecoin cryptoeconomic model.

Initial Parameter Recommendation

Economic analyses and models were developed to design, validate, and parameterize the mechanisms described in the sections listed above. Cryptoeconomics is a young field, where global expertise is both sparse and shallow. Developing these recommendations is advancing the state of the art, not only in the field of decentralized storage networks, but also of cryptoeconomic mechanism design as a wider discipline.

The following table summarizes initial parameter recommendations for Filecoin. Monitoring, testing, validation and recommendations will continue to evolve and adapt. When changes to these parameters are due they will be announced and applied through FIPs.

Parameter Value
Baseline Storage Amount Initial Value 2.88888888 EB
Baseline Storage Amount Function 100% annual growth
Percent simple minting vs baseline minting 30% / 70%
Reward delay and linear vesting period 0 days
Linear vesting period 180 days
Sector quality multipliers Committed Capacity: 1x
Regular Deals: 1x
Verified Client Deals: 10x
Initial pledge function 20 days worth of block reward +
share of 30% qa power-normalized circulating supply
Initial Pledge Cap 1 FIL / 32 GiB QA Power
Minimum sector lifetime 180 days
Maximum sector lifetime 540 days
Minimum deal duration 180 days
Maximum deal duration 540 days
Sector Fault Fee 2.14 days worth of block reward
Sector Fault Detection Fee 1.5 days worth of estimated block reward
Sector Termination Fee Estimated number of days of block reward that a sector has earned; capped at 90 days
Minimum Client Deal Collateral 0
Minimum Provider Deal Collateral share of 1% raw byte-normalised circulating supply
Network Gas Fee Dynamic fee structure based on network congestion

Design Principles Justification

Baseline Minting: Filecoin tokens are a limited resource. The rate at which tokens are deployed into the network should be controlled to maximize their net benefit to the community, just like the consumption of any exhaustible common-pool resource. The purpose of baseline minting is to: (a) reward participants in proportion to the storage they provide rather than exponentially based on the time when they joined the network, and (b) adjust the minting rate based on approximated network utility in order to maintain a relatively steady flow of block rewards over longer time periods.

Initial Pledge: The justification for having an initial pledge is as follows: firstly, having an initial pledge forces miners to behave responsibly on their sector commitments and holds them accountable for not keeping up to their promise, even before they earn any block reward. Secondly, requiring a pledge of stake in the network supports and enhances the security of the consensus mechanism.

Block Reward Vesting: In order to reduce the initial pledge requirement of a sector, the network considers all vesting block rewards as collateral. However, tracking block rewards on a per-sector level is not scalable. Instead, the protocol tracks rewards at a per-miner level and linearly vests block rewards over a fixed duration.

Minimum Sector Lifetime: The justification for a minimum sector lifetime is as follows. Committing a sector to the Filecoin Network currently requires a moderately computationally-expensive “sealing” operation up-front, whose amortized cost is lower if the sector’s lifetime is longer. In addition, a sector commitment will involve on-chain messages, for which gas fees will be paid. The net effect of these message costs will be subsidized by the block reward, but only for sectors that will contribute to the network and earn rewards for a sufficiently long duration. Under current constraints, short-lived sectors would reduce the overall capacity of the network to deliver useful storage over time.

Sector Fault Fee: If stored sectors are withdrawn from the network only temporarily, a substantial fraction of those sectors’ value may be recovered in case the data storage is quickly restored to normal operation — this means that the network need not levy a termination fee immediately. However, even temporary interruptions can be disruptive, and also damage confidence about whether the sector is recoverable or permanently lost. In order to account for this situation, the network charges a much smaller fee per day that a sector is not being proven as promised (until enough days have passed that the network writes off that sector as terminated).

Sector Fault Detection Fee: If a sector is temporarily damaged, storage miners are expected to proactively detect, report, and repair the fault. An unannounced interruption in service is both more disruptive for clients and more of a signal that the fault may not have been caught early enough to fully recover. Finally, dishonest storage miners may have some chance of briefly evading detection and earning rewards despite being out of service. For all these reasons, a higher penalty is applied when the network detects an undeclared fault.

Sector Termination Fee: The ultimate goal of the Filecoin Network is to provide useful data storage. Use-cases for unreliable data storage, which may vanish without warning, are much rarer than use-cases for reliable data storage, which is guaranteed in advance to be maintained for a given duration. So to the extent that committed sectors disappear from the network, most of the value provided by those sectors is canceled out, in most cases. If storage miners had little to lose by terminating active sectors compared to their realized gains, then this would be a negative externality that fails to be effectively managed by the storage market; termination fees internalize this cost.

Glossary

Account Actor

The Account Actor is responsible for user accounts.

Actor

An actor is the Filecoin equivalent of a smart contract in Ethereum.

An actor is an on-chain object with its own state and set of methods. An actor’s state is persisted in the on-chain state tree, keyed by its address. All actors (miner actors, the storage market actor, account actors) have an address. An actor’s methods are invoked by crafting messages and getting miners to include them in blocks.

There are eleven (11) built-in System Actors in total in the Filecoin system.

Address

An address is an identifier that refers to an actor in the Filecoin state.

In the Filecoin network, an address is a unique cryptographic value that serves to publicly identify a user. This value, which is a public key, is paired with a corresponding private key. The mathematical relationship between the two keys is such that access to the private key allows the creation of a signature that can be verified with the public key. Filecoin employs both the secp256k1 and Boneh–Lynn–Shacham (BLS) signature schemes for this purpose.

Ask

An ask contains the terms on which a miner is willing to provide services. Storage asks, for example, contain price and other terms under which a given storage miner is willing to lease its storage. The word comes from stock market usage, shortened from asking price.

Block

In a blockchain, a block is the fundamental unit of record. Each block is cryptographically linked to one or more previous blocks. Blocks typically contain messages that apply changes to the previous state (for example, financial records) tracked by the blockchain. A block represents the state of the network at a given point in time.

Block Height

The height of a block corresponds to the number of epochs elapsed from genesis before the block was added to the blockchain. As such, height and epoch are often used synonymously. The height of the Filecoin blockchain is defined to be the maximum height of any block in the blockchain.

Block Reward

The reward in FIL given to storage miners for contributing to the network with storage and proving that they have stored the files they have committed to store. The Block Reward is allocated to the storage miners that mine blocks and extend the blockchain.

Blockchain

A blockchain is a system of record in which new records, or blocks, are cryptographically linked to preceding records. This construction is a foundational component of secure, verifiable, and distributed transaction ledgers.

Bootstrapping

Bootstrapping traditionally refers to the process of starting a network. In the context of the Filecoin network, bootstrapping refers to the process of onboarding a new Filecoin node onto the network: connecting the new node to other peers, synchronizing the blockchain, and “catching up” with the current state.

Capacity commitment

If a storage miner doesn’t find any available deal proposals appealing, they can alternatively make a capacity commitment, filling a sector with arbitrary data, rather than with client data. Maintaining this sector allows the storage miner to provably demonstrate that they are reserving space on behalf of the network. Also referred to as Committed Capacity (CC).

Challenge Sampling

An algorithm for challenge derivation used in Proof of Replication or Proof of SpaceTime.

CID

CID is short for Content Identifier, a self-describing content address used throughout the IPFS ecosystem. CIDs are used in Filecoin to identify files submitted to the decentralized storage network. For more detailed information, see the github documentation for it.

Client

There are two types of clients in Filecoin, the storage client and the retrieval client, both of which can be implemented as part of the same physical host. All clients have an account, which is used to pay for the storage or retrieval of data.

Collateral

Collateral is Filecoin tokens pledged by an actor as a commitment to a promise. If the promise is respected, the collateral is returned. If the promise is broken, the collateral is not returned in full.

In order to enter into a storage deal, a storage miner is required to provide FIL as collateral, to be paid out as compensation to a client in the event that the miner fails to uphold their storage commitment.

Consensus

The algorithm(s) and logic needed so that the state of the blockchain is agreed across all nodes in the network.

Consensus Fault Slashing

Consensus Fault Slashing is the penalty that a miner incurs for committing consensus faults. This penalty is applied to miners that have acted maliciously against the network’s consensus functionality.

Cron Actor

The Cron Actor is a scheduler actor that runs critical functions at every epoch.

Deal

Two participants in the Filecoin network can enter into a deal in which one party contracts the services of the other for a given price agreed between the two. The Filecoin specification currently details storage deals (in which one party agrees to store data for the other for a specified length of time) and retrieval deals (in which one party agrees to transmit specified data to the other).

Deal Quality Multiplier

This factor is assigned to different deal types (committed capacity, regular deals, and verified client deals) to reward different content.

Deal Weight

This weight converts spacetime occupied by deals into consensus power. Deal weight of verified client deals in a sector is called Verified Deal Weight and will be greater than the regular deal weight.

DRAND

DRAND, short for Distributed Randomness, is a publicly verifiable random beacon protocol that Filecoin uses as a source of unbiasable entropy for leader election. See the DRAND website for more details.

Election

On every epoch, a small subset of Filecoin storage miners is elected to mine one or more new blocks for the Filecoin blockchain. A miner’s probability of being elected is roughly proportional to the share of the Filecoin network’s total storage capacity they contribute. Election in Filecoin is realized through Expected Consensus.

Election Proof

Election Proof is used as a source of randomness in EC leader election. The election proof is created by calling VRF and giving the secret key of the miner’s worker and the DRAND value of the current epoch as input.

Epoch

Time in the Filecoin blockchain is discretized into epochs that are currently set to thirty (30) seconds in duration. On every epoch, a subset of storage miners are elected to each add a new block to the Filecoin blockchain via Winning Proof-of-Spacetime. Also referred to as Round.

Fault

A fault occurs when a proof is not posted in the Filecoin system within the proving period, denoting a malfunction such as loss of network connectivity, storage malfunction, or malicious behaviour.

When a storage miner fails to complete Window Proof-of-Spacetime for a given sector, the Filecoin network registers a fault for that sector, and the miner is slashed. If a storage miner does not resolve the fault quickly, the network assumes they have abandoned their commitment.

FIL

FIL is the name of the Filecoin unit of currency; it is alternatively denoted by the Unicode symbol for an integral with a double stroke (⨎).

File

Files are what clients bring to the filecoin system to store. A file is converted to a UnixFS DAG and is placed in a piece. A piece is the basic unit of account in the storage network and is what is actually stored by the Filecoin network.

Filecoin

The term Filecoin is used generically to refer to the Filecoin project, protocol, and network.

Finality

Finality is a well-known concept in blockchain environments and refers to the amount of time needed until there is a reasonable guarantee that a message cannot be reversed or cancelled. It is measured in terms of delay, normally in epochs or rounds, from the point when a message has been included in a block published on-chain.

fr32

The term fr32 is derived from the name of a struct that Filecoin uses to represent the elements of the arithmetic field of a pairing-friendly curve, specifically BLS12-381, which justifies the use of 32 bytes. F stands for “Field”, while r is simply the conventional letter used to denote the modulus of this particular field.

Gas, Gas Fees

Gas is a property of a message, corresponding to the resources involved in including that message in a given block. For each message included in a block, the block’s creator (i.e., miner) charges a fee to the message’s sender.

Genesis Block

The genesis block is the first block of the Filecoin blockchain. As is the case with every blockchain, the genesis block is the foundation of the blockchain. Any block mined in the future should link back, through its chain of ancestors, to the genesis block.

GHOST

GHOST is an acronym for Greedy Heaviest Observable SubTree, a class of blockchain structures in which multiple blocks can validly be included in the chain at any given height or round. GHOSTy protocols produce blockDAGs rather than blockchains and use a weighting function for fork selection, rather than simply picking the longest chain.

Height

Same as Block Height.

Init Actor

The Init Actor initializes new actors and records the network name.

Lane

Lanes are used to split Filecoin Payment Channels as a way to update the channel state (e.g., for different services exchanged between the same end-points/users). The channel state is updated using vouchers with every lane marked with an associated nonce and amount of tokens it can be redeemed for.

Leader

A leader, in the context of Filecoin consensus, is a node that is chosen to propose the next block in the blockchain during Leader Election.

Leader election

Same as Election.

Message

A message is a call to an actor in the Filecoin VM. The term message is used to refer to data stored as part of a block. A block can contain several messages.

Miner

A miner is an actor in the Filecoin system performing a service in the network for a reward.

There are three types of miners in Filecoin:

  • Storage miners, who store files on behalf of clients.
  • Retrieval miners, who deliver stored files to clients.
  • Repair miners, who replicate files to keep them available in the network, when a storage miner presents a fault.

Multisig Actor

The Multisig Actor (or Multi-Signature Wallet Actor) is responsible for dealing with operations involving the Filecoin wallet.

Node

A node is a communication endpoint that implements the Filecoin protocol.

On-chain/off-chain

On-chain actions are those that change the state of the tree and the blockchain and interact with the Filecoin VM. Off-chain actions are those that do not interact with the Filecoin VM.

Payment Channel

A payment channel is set up between actors in the Filecoin system to enable off-chain payments with on-chain guarantees, making settlement more efficient. Payment channels are managed by the Payment Channel Actor, who is responsible for setting up and settling funds related to payment channels.

Piece

The Piece is the main unit of account and negotiation for the data that a user wants to store on Filecoin. In the Lotus implementation, a piece is a CAR file produced by an IPLD DAG with its own payload CID and piece CID. However, a piece can be produced in different ways as long as the outcome matches the piece CID. A piece is not a unit of storage and therefore it can be of any size up to the size of a sector. If a piece is larger than a sector (currently set to 32GB or 64GB and chosen by the miner), then it has to be split into two (or more) pieces. For more details on the exact sizing of a Filecoin Piece as well as how it can be produced, see the Piece section.

Pledged Storage

Storage capacity (in terms of sectors) that a miner has promised to reserve for the Filecoin network via Proof-of-Replication is termed pledged storage.

Power

See Power Fraction.

Power Fraction

A storage miner’s Power Fraction or Power is the ratio of their committed storage, as of their last PoSt submission, over Filecoin’s total committed storage as of the current block. It is used in Leader Election. It is the proportion of power that a storage miner has in the system as a fraction of the overall power of all miners.

Power Table

The Power Table is an abstraction provided by the Filecoin storage market that lists the power of every storage miner in the system.

Protocol

Commonly refers to the “Filecoin Protocol”.

Proving Period

Commonly referred to as the “duration of a PoSt”, the proving period is the period of time during which storage miners must compute Proofs of Spacetime. By the end of the period they must have submitted their PoSt.

Proving Set

The elements used as input by a proof of Spacetime to enable a proof to be generated.

Proof of Replication (PoRep)

Proof-of-Replication is a procedure by which a storage miner can prove to the Filecoin network that they have created a unique copy of some piece of data on the network’s behalf. PoRep is used in the Filecoin system to generate sealed sectors through which storage miners prove they hold client data.

Proof of Spacetime (PoSt)

Proof-of-Spacetime is a procedure by which a storage-miner can prove to the Filecoin network they have stored and continue to store a unique copy of some data on behalf of the network for a period of time. Proof-of-Spacetime manifests in two distinct varieties in the present Filecoin specification: Window Proof-of-Spacetime and Winning Proof-of-Spacetime.

Quality-Adjusted Power

This parameter measures the consensus power of stored data on the network, and is equal to Raw Byte Power multiplied by Sector Quality Multiplier.

Randomness

Randomness is used in Filecoin to generate random values for electing the next leader and to prevent malicious actors from predicting future values and gaining an advantage over the system. Random values are drawn from a DRAND beacon and appropriately formatted for usage.

Randomness Ticket

See Ticket.

Raw Byte Power

This measurement is the size of a sector in bytes.

Retrieval miner

A retrieval miner is a Filecoin participant that enters into retrieval deals with clients, agreeing to supply a client with a particular file in exchange for FIL. Note that unlike storage miners, retrieval miners are not additionally rewarded with the ability to add blocks to (i.e., extend) the Filecoin blockchain; their only reward is the fee they extract from the client.

Repair

Repair refers to the processes and protocols by which the Filecoin network ensures that data that is partially lost (by, for example, a miner disappearing) can be re-constructed and re-added to the network. Repairing is done by Repair Miners.

Reward Actor

The Reward Actor is responsible for distributing block rewards to storage miners and token vesting.

Round

A Round is synonymous with the epoch and is the time period during which new blocks are mined to extend the blockchain. The duration of a round is set to 30 seconds.

Seal

Sealing is a cryptographic operation that transforms a sector packed with deals into a certified replica associated with: i) a particular miner’s cryptographic identity, ii) the sector’s own identity.

Sealing is one of the fundamental building blocks of the Filecoin protocol. It is a computation-intensive process performed over a sector that results in a unique representation of the sector as it is produced by a specific miner. The properties of this new representation are essential to the Proof-of-Replication and the Proof-of-Spacetime procedures.

Sector

The sector is the default unit of storage that miners put in the network (currently 32GBs or 64GBs). A sector is a contiguous array of bytes that a storage miner puts together, seals, and performs Proofs of Spacetime on. Storage miners store data on behalf of the Filecoin network in fixed-size sectors.

Sectors can contain data from multiple deals and multiple clients. Sectors are also split into “Regular Sectors”, i.e., those that contain deals, and “Committed Capacity” (CC), i.e., sectors/storage that have been made available to the system but for which a deal has not yet been agreed.

Sector Quality Multiplier

Sector quality is assigned on Activation (the epoch when the miner starts proving they’re storing the file). The sector quality multiplier is computed as an average of deal quality multipliers (committed capacity, regular deals, and verified client deals), weighted by the amount of spacetime each type of deal occupies in the sector.

Sector Spacetime

This measurement is the sector size multiplied by its promised duration in byte-epochs.
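
Putting the last few definitions together, the following is a minimal Go sketch of how Raw Byte Power, the Sector Quality Multiplier, and Quality-Adjusted Power relate. The multiplier constants and the floating-point averaging are illustrative assumptions only; the normative values and the fixed-point arithmetic used on chain are defined elsewhere in the spec.

package main

import "fmt"

// Illustrative deal quality multipliers only; the normative values are
// defined in the relevant spec sections, not here.
const (
	ccMultiplier       = 1.0  // committed capacity
	regularMultiplier  = 1.0  // regular deals
	verifiedMultiplier = 10.0 // verified client deals (assumed value)
)

// sectorQualityMultiplier is the average of the deal quality multipliers,
// weighted by the spacetime (bytes * epochs) each deal type occupies.
func sectorQualityMultiplier(ccSpacetime, regularSpacetime, verifiedSpacetime float64) float64 {
	total := ccSpacetime + regularSpacetime + verifiedSpacetime
	if total == 0 {
		return 1.0
	}
	weighted := ccSpacetime*ccMultiplier +
		regularSpacetime*regularMultiplier +
		verifiedSpacetime*verifiedMultiplier
	return weighted / total
}

func main() {
	rawBytePower := float64(32 << 30) // a single 32 GiB sector
	duration := 1000.0                // promised duration in epochs

	// Half of the sector spacetime is verified deals, the rest is committed capacity.
	spacetime := rawBytePower * duration
	q := sectorQualityMultiplier(spacetime/2, 0, spacetime/2)

	// Quality-Adjusted Power = Raw Byte Power * Sector Quality Multiplier.
	fmt.Printf("sector quality multiplier: %.2f, quality-adjusted power: %.0f bytes\n",
		q, rawBytePower*q)
}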

Slashing

Filecoin implements two kinds of slashing: Storage Fault Slashing and Consensus Fault Slashing.

Smart contracts

In the Filecoin blockchain smart contracts are referred to as actors.

State

The State or State Tree refers to the shared history of the Filecoin system which contains actors and their storage power. The State is deterministically generated from the initial state and the set of messages generated by the system.

Storage Market Actor

The Storage Market Actor is responsible for managing storage and retrieval deals.

Storage Miner Actor

The Storage Miner Actor commits storage to the network, stores data on behalf of the network and is rewarded in FIL for the storage service. The storage miner actor is responsible for collecting proofs and reaching consensus on the latest state of the storage network. When they create a block, storage miners are rewarded with newly minted FIL, as well as the message fees they can levy on other participants seeking to include messages in the block.

Storage Power Actor

The Storage Power Actor is responsible for keeping track of the storage power allocated at each storage miner.

Storage Fault Slashing

Storage Fault Slashing is a term that is used to encompass a broader set of penalties, including (but not limited to) Fault Fees, Sector Penalties, and Termination Fees. These penalties are to be paid by miners if they fail to provide sector reliability or decide to voluntarily exit the network.

  • Fault Fee (FF): A penalty that a miner incurs for each day a miner’s sector is offline.
  • Sector Penalty (SP): A penalty that a miner incurs for a faulted sector that was not declared faulted before a WindowPoSt check occurs.
    • The sector will pay FF after incurring an SP when the fault is detected.
  • Termination Penalty (TP): A penalty that a miner incurs when a sector is voluntarily or involuntarily terminated and is removed from the network.

Ticket or VRF Chain

Tickets are generated as in Election Proof, but the input of every ticket includes the concatenation of the previous ticket, hence the term chain. This means that the new ticket is generated by running the VRF on the old ticket concatenated with the new DRAND value (and the key as with the Election Proof).
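
As a structural illustration of this chaining (a sketch only: the real protocol evaluates a VRF keyed by the miner worker’s secret key, and the keyed hash below is merely a runnable stand-in for that primitive), each new ticket is derived from the previous ticket concatenated with the new DRAND value:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// nextTicket derives a new ticket from the previous ticket and the current
// DRAND value. The real protocol evaluates a VRF under the miner worker's
// secret key; HMAC-SHA256 is used here only as a placeholder keyed function
// to show how each ticket chains onto the previous one.
func nextTicket(workerSecret, prevTicket, drandValue []byte) []byte {
	mac := hmac.New(sha256.New, workerSecret)
	mac.Write(prevTicket) // concatenation of the previous ticket...
	mac.Write(drandValue) // ...with the new DRAND value
	return mac.Sum(nil)
}

func main() {
	secret := []byte("placeholder worker secret")
	ticket := []byte("previous ticket")
	for epoch := 0; epoch < 3; epoch++ {
		drand := []byte(fmt.Sprintf("drand value for epoch %d", epoch))
		ticket = nextTicket(secret, ticket, drand)
		fmt.Printf("epoch %d ticket: %x\n", epoch, ticket)
	}
}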

Tipset

A tipset is a set of blocks that each have the same height and parent tipset; the Filecoin blockchain is a chain of tipsets, rather than a chain of blocks.

Each tipset is assigned a weight corresponding to the amount of storage the network is provided per the commitments encoded in the tipset’s blocks. The consensus protocol of the network directs nodes to build on top of the heaviest chain.

By basing its blockchain on tipsets, Filecoin can allow multiple storage miners to create blocks in the same epoch, increasing network throughput. By construction, this also provides network security: a node that attempts to intentionally prevent the valid blocks of a second node from making it onto the canonical chain runs up against the consensus preference for heavier chains.

Verified client

To further incentivize the storage of “useful” data over simple capacity commitments, storage miners have the additional opportunity to compete for special deals offered by verified clients. Such clients are certified with respect to their intent to offer deals involving the storage of meaningful data, and the power a storage miner earns for these deals is augmented by a multiplier.

Verified Registry Actor

The Verified Registry Actor is responsible for managing verified clients.

VDF

A Verifiable Delay Function that guarantees a random delay given some hardware assumptions and a small set of requirements. These requirements are efficient proof verification, random output, and strong sequentiality. Verifiable delay functions are formally defined by BBBF.

{proof, value} <-- VDF(public parameters, seed)

(Filecoin) Virtual Machine (VM)

The Filecoin VM refers to the system by which changes are applied to the Filecoin system’s state. The VM takes messages as input, and outputs updated state. The four main Actors interact with the Filecoin VM to update the state. These are: the InitActor, the CronActor, the AccountActor and the RewardActor.

Voucher

Vouchers are used as part of the Payment Channel Actor. Vouchers are signed messages exchanged between the channel creator and the channel recipient to acknowledge that a part of the service has been completed. Vouchers are the realisation of micropayments or checkpoints in a payment channel. Vouchers are submitted to the blockchain and when Collected, funds are moved from the channel creator’s account to the channel recipient’s account.

VRF

A Verifiable Random Function (VRF) that receives {Secret Key (SK), seed} and outputs {proof of correctness, output value}. VRFs must yield a proof of correctness and a unique & efficiently verifiable output.

{proof, value} <-- VRF(SK, seed)

Weight

Every mined block has a computed weight, also called its WinCount. Together, the weights of all the blocks in a branch of the chain determines the cumulative weight of that branch. Filecoin’s Expected Consensus is a GHOSTy or heaviest-chain protocol, where chain selection is done on the basis of an explicit weighting function. Filecoin’s weight function currently seeks to incentivize collaboration amongst miners as well as the addition of storage to the network. The specific weighting function is defined in Chain Weighting.

Window Proof-of-Spacetime (WindowPoSt)

Window Proof-of-Spacetime (WindowPoSt) is the mechanism by which the commitments made by storage miners are audited. It sees each 24-hour period broken down into a series of windows. Correspondingly, each storage miner’s set of pledged sectors is partitioned into subsets, one subset for each window. Within a given window, each storage miner must submit a Proof-of-Spacetime for each sector in their respective subset. This requires ready access to each of the challenged sectors, and will result in a zk-SNARK-compressed proof published to the Filecoin blockchain as a message in a block. In this way, every sector of pledged storage is audited at least once in any 24-hour period, and a permanent, verifiable, and public record attesting to each storage miner’s continued commitment is kept.

The Filecoin network expects constant availability of stored data. Failing to submit WindowPoSt for a sector will result in a fault, and the storage miner supplying the sector will be slashed.

Winning Proof-of-Spacetime (WinningPoSt)

Winning Proof-of-Spacetime (WinningPoSt) is the mechanism by which storage miners are rewarded for their contributions to the Filecoin network. At the beginning of each epoch, a small number of storage miners are elected to each mine a new block. As a requirement for doing so, each miner is tasked with submitting a compressed Proof-of-Spacetime for a specified sector. Each elected miner who successfully creates a block is granted FIL, as well as the opportunity to charge other Filecoin participants fees to include messages in the block.

Storage miners who fail to do this in the necessary window will forfeit their opportunity to mine a block, but will not otherwise incur penalties.

zk-SNARK

zk-SNARK stands for Zero-Knowledge Succinct Non-Interactive Argument of Knowledge.

An argument of knowledge is a construction by which one party, called the prover, can convince another, the verifier, that the prover has access to some piece of information. There are several possible constraints on such constructions:

  • A non-interactive argument of knowledge has the requirement that just a single message, sent from the prover to the verifier, should serve as a sufficient argument.

A zero-knowledge argument of knowledge has the requirement that the verifier should not need access to the knowledge the prover has access to in order to verify the prover’s claim.

A succinct argument of knowledge is one that can be “quickly” verified, and which is “small”, for appropriate definitions of both of those terms.

A Zero-Knowledge Succinct Non-Interactive Argument of Knowledge (zk-SNARK) embodies all of these properties. Filecoin utilizes these constructions to enable its distributed network to efficiently verify that storage miners are storing files they pledged to store, without requiring the verifiers to maintain copies of these files themselves.

In summary, Filecoin uses zk-SNARKs to produce a small ‘proof’ that convinces a ‘verifier’ that some computation on a stored file was done correctly, without the verifier needing to have access to the stored file itself.

Appendix

Filecoin Address

A Filecoin address is an identifier that refers to an actor in the Filecoin state. All actors (miner actors, the storage market actor, account actors) have an address. This address encodes information about the network to which an actor belongs, the specific type of address encoding, the address payload itself, and a checksum. The goal of this format is to provide a robust address format that is both easy to use and resistant to errors.

Note that each ActorAddress in the protocol contains a unique ActorID given to it by the InitActor. Throughout the protocol, actors are referenced by their ID-addresses. ID-addresses are computable from IDs (using makeIdAddress(id)), and vice versa (using the AddressMap to go from addr to ID).

Most actors have an alternative address which is not their key in the state tree, but is resolved to their ID-address during message processing.

Accounts have a public key-based address (e.g. to receive funds), and non-singleton actors have a temporary reorg-proof address.

An account actor’s crypto-address (for signature verification) is found by looking up its actor state, keyed by the canonical ID-address. There is no map from ID-address to pubkey address.

The reference implementation of the Filecoin Address can be found in the go-address Github repository.

Design criteria

  1. Identifiable: The address must be easily identifiable as a Filecoin address.
  2. Reliable: Addresses must provide a mechanism for error detection when they might be transmitted outside the network.
  3. Upgradable: Addresses must be versioned to permit the introduction of new address formats.
  4. Compact: Given the above constraints, addresses must be as short as possible.

Specification

There are two ways a Filecoin address can be represented. An address appearing on chain will always be formatted as raw bytes. An address may also be encoded to a string; this encoding includes a checksum and a network prefix. An address encoded as a string will never appear on chain; this format is used for sharing among humans.

Bytes

When represented as bytes, a Filecoin address contains the following:

  • A protocol indicator byte that identifies the type and version of this address.
  • The payload used to uniquely identify the actor according to the protocol.
|----------|---------|
| protocol | payload |
|----------|---------|
|  1 byte  | n bytes |

String

When encoded to a string, a Filecoin address contains the following:

  • A network prefix character that identifies the network the address belongs to.
  • A protocol indicator byte that identifies the type and version of this address.
  • A payload used to uniquely identify the actor according to the protocol.
  • A checksum used to validate the address.
|------------|----------|---------|----------|
|  network   | protocol | payload | checksum |
|------------|----------|---------|----------|
| 'f' or 't' |  1 byte  | n bytes | 4 bytes  |

Network Prefix

The network prefix is prepended to an address when encoding to a string. The network prefix indicates which network an address belongs to. The network prefix may either be f for filecoin mainnet or t for filecoin testnet. It is worth noting that a network prefix will never appear on chain and is only used when encoding an address to a human readable format.

Protocol Indicator

The protocol indicator byte describes how a method should interpret the information in the payload field of an address. Any deviation from the algorithms and data types specified by the protocol must be assigned a new protocol number. In this way, protocols also act as versions.

  • 0 : ID
  • 1 : SECP256K1 Public Key
  • 2 : Actor
  • 3 : BLS Public Key

An example description in golang:

// Protocol byte
type Protocol = byte

const (
	ID Protocol = iota
	SECP256K1
	Actor
	BLS
)
Protocol 0: IDs

Protocol 0 addresses are simple IDs. All actors have a numeric ID even if they don’t have public keys. The payload of an ID address is base10 encoded. IDs are not hashed and do not have a checksum.

Bytes

|----------|---------------|
| protocol |    payload    |
|----------|---------------|
|    0     | leb128-varint |

String

|------------|----------|---------------|
|  network   | protocol |    payload    |
|------------|----------|---------------|
| 'f' or 't' |    '0'   | leb128-varint |
                  base10[...............]
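
To make the layout above concrete, here is a minimal sketch (not the reference go-address implementation) that builds both forms of an ID address. Go's binary.PutUvarint emits the same unsigned LEB128 varint called for above, and the result can be checked against the ID test vectors listed later in this appendix.

package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"strconv"
)

// encodeIDAddress produces the byte and string forms of a protocol-0 (ID) address.
// The byte form is the protocol byte followed by the LEB128-encoded ID; the
// string form is the network prefix, the protocol digit, and the ID in base10.
func encodeIDAddress(network string, id uint64) (raw []byte, str string) {
	varint := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(varint, id) // unsigned LEB128

	raw = append([]byte{0x00}, varint[:n]...) // protocol indicator 0, then payload
	str = network + "0" + strconv.FormatUint(id, 10)
	return raw, str
}

func main() {
	raw, str := encodeIDAddress("f", 1024)
	fmt.Println(str)                     // f01024
	fmt.Println(hex.EncodeToString(raw)) // 008008, matching the ID test vectors below
}
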
Protocol 1: libsecp256k1 Elliptic Curve Public Keys

Protocol 1 addresses represent secp256k1 public keys. The payload field contains the Blake2b-160 hash of the uncompressed public key (65 bytes).

Bytes

|----------|----------------------------------|
| protocol |               payload            |
|----------|----------------------------------|
|    1     | blake2b-160( PubKey [65 bytes] ) |

String

|------------|----------|--------------------------------|----------|
|  network   | protocol |      payload                   | checksum |
|------------|----------|--------------------------------|----------|
| 'f' or 't' |    '1'   | blake2b-160(PubKey [65 bytes]) |  4 bytes |
                  base32[...........................................]
Protocol 2: Actor

Protocol 2 addresses represent an Actor. The payload field contains the SHA256 hash of meaningful data produced as a result of creating the actor.

Bytes

|----------|---------------------|
| protocol |        payload      |
|----------|---------------------|
|    2     |    SHA256(Random)   |

String

|------------|----------|-----------------------|----------|
|  network   | protocol |         payload       | checksum |
|------------|----------|-----------------------|----------|
| 'f' or 't' |    '2'   |     SHA256(Random)    |  4 bytes |
                  base32[..................................]
Protocol 3: BLS

Protocol 3 addresses represent BLS public keys. The payload field contains the BLS public key.

Bytes

|----------|---------------------|
| protocol |        payload      |
|----------|---------------------|
|    3     | 48 byte BLS PubKey  |

String

|------------|----------|---------------------|----------|
|  network   | protocol |      payload        | checksum |
|------------|----------|---------------------|----------|
| 'f' or 't' |    '3'   |  48 byte BLS PubKey |  4 bytes |
                  base32[................................]

Payload

The payload represents the data specified by the protocol. All payloads except the payload of the ID protocol are base32 encoded using the lowercase alphabet when serialized to their human readable format.

Checksum

Filecoin checksums are calculated over the address protocol and payload using blake2b-4. Checksums are base32 encoded and only added to an address when encoding to a string. Addresses following the ID Protocol do not have a checksum.
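
As an illustrative sketch of the checksum and string encoding described above (assuming golang.org/x/crypto/blake2b for the variable-length BLAKE2b digests and the lowercase base32 alphabet without padding; this is not the reference go-address implementation), a protocol 1 address could be assembled as follows:

package main

import (
	"encoding/base32"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// Lowercase RFC 4648 alphabet without padding, as used for the string form of addresses.
var addrEncoding = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)

// checksum is a blake2b digest of 4 bytes over the protocol byte and the payload.
func checksum(protocol byte, payload []byte) []byte {
	h, _ := blake2b.New(4, nil)
	h.Write([]byte{protocol})
	h.Write(payload)
	return h.Sum(nil)
}

// encodeSecp256k1 builds the string form of a protocol 1 address from an
// uncompressed 65-byte secp256k1 public key.
func encodeSecp256k1(network string, pubKey []byte) string {
	h, _ := blake2b.New(20, nil) // blake2b-160 of the public key is the payload
	h.Write(pubKey)
	payload := h.Sum(nil)

	cksm := checksum(1, payload)
	return network + "1" + addrEncoding.EncodeToString(append(payload, cksm...))
}

func main() {
	pubKey := make([]byte, 65) // placeholder key bytes, for illustration only
	fmt.Println(encodeSecp256k1("t", pubKey))
}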

Expected Methods

All implementations in Filecoin must have methods for creating, encoding, and decoding addresses in addition to checksum creation and validation. The following is a golang version of the Address Interface:

func New(protocol byte, payload []byte) Address

type Address interface {
	Encode(network Network, a Address) string
	Decode(s string) Address
	Checksum(a Address) []byte
	ValidateChecksum(a Address) bool
}
New()

New() returns an Address for the specified protocol encapsulating the corresponding payload. New fails for unknown protocols.

func New(protocol byte, payload []byte) Address {
	if protocol < SECP256K1 || protocol > BLS {
		Fatal(ErrUnknownType)
	}
	return Address{
		Protocol: protocol,
		Payload:  payload,
	}
}
Encode()

Software encoding a Filecoin address must:

  • produce an address encoded to a known network
  • produce an address encoded to a known protocol
  • produce an address with a valid checksum (if applicable)

Encode() encodes an Address as a string, prepending the network prefix, calculating the checksum, and encoding the payload and checksum to base32.

func Encode(network string, a Address) string {
	if network != "f" && network != "t" {
		Fatal("Invalid Network")
	}

	switch a.Protocol {
	case SECP256K1, Actor, BLS:
		cksm := Checksum(a)
		return network + string(a.Protocol) + base32.Encode(a.Payload+cksm)
	case ID:
		return network + string(a.Protocol) + base10.Encode(leb128.Decode(a.Payload))
	default:
		Fatal("invalid address protocol")
	}
}
Decode()

Software decoding a Filecoin address must:

  • verify the network is a known network.
  • verify the protocol is a number of a known protocol.
  • verify the checksum is valid

Decode() decodes an Address from a string by removing the network prefix, validating the address is of a known protocol, decoding the payload and checksum, and validating the checksum.

func Decode(a string) Address {
	if len(a) < 3 {
		Fatal(ErrInvalidLength)
	}

	if a[0] != "f" && a[0] != "t" {
		Fatal(ErrUnknownNetwork)
	}

	protocol := a[1]
	raw := a[2:]
	if protocol == ID {
		return Address{
			Protocol: protocol,
			Payload:  leb128.Encode(base10.Decode(raw)),
		}
	}

	raw = base32.Decode(raw)
	payload := raw[:len(raw)-CksmLen]
	cksm := raw[len(raw)-CksmLen:]
	if protocol == SECP256K1 || protocol == Actor {
		if len(payload) != 20 {
			Fatal(ErrInvalidBytes)
		}
	}

	// The checksum is computed over the protocol byte and the payload.
	if !ValidateChecksum(protocol+payload, cksm) {
		Fatal(ErrInvalidChecksum)
	}

	return Address{
		Protocol: protocol,
		Payload:  payload,
	}
}
Checksum()

Checksum produces a byte array by taking the blake2b-4 hash of an address protocol and payload.


func Checksum(a Address) [4]byte {
	return blake2b4(a.Protocol + a.Payload)
}
ValidateChecksum()

ValidateChecksum returns true if the Checksum of data matches the expected checksum.

func ValidateChecksum(data, expected []byte) bool {
	digest := Checksum(data)
	return digest == expected
}

Test Vectors

These are a set of test vectors that can be used to test an implementation of this address spec. Test vectors are presented as newline-delimited address/hex fields. The ‘address’ field, when parsed, should produce raw bytes that match the corresponding item in the ‘hex’ field. For example:

address1
hex1

address2
hex2

ID Type Addresses

f00
0000

f0150
009601

f01024
008008

f01729
00c10d

f018446744073709551615
00ffffffffffffffffff01

Secp256k1 Type Addresses

f17uoq6tp427uzv7fztkbsnn64iwotfrristwpryy
01fd1d0f4dfcd7e99afcb99a8326b7dc459d32c628

f1xcbgdhkgkwht3hrrnui3jdopeejsoatkzmoltqy
01b882619d46558f3d9e316d11b48dcf211327026a

f1xtwapqc6nh4si2hcwpr3656iotzmlwumogqbuaa
01bcec07c05e69f92468e2b3e3bf77c874f2c5da8c

f1wbxhu3ypkuo6eyp6hjx6davuelxaxrvwb2kuwva
01b06e7a6f0f551de261fe3a6fe182b422ee0bc6b6

f12fiakbhe2gwd5cnmrenekasyn6v5tnaxaqizq6a
01d1500504e4d1ac3e89ac891a4502586fabd9b417

Actor Type Addresses

f24vg6ut43yw2h2jqydgbg2xq7x6f4kub3bg6as6i
02e54dea4f9bc5b47d261819826d5e1fbf8bc5503b

f25nml2cfbljvn4goqtclhifepvfnicv6g7mfmmvq
02eb58bd08a15a6ade19d0989674148fa95a8157c6

f2nuqrg7vuysaue2pistjjnt3fadsdzvyuatqtfei
026d21137eb4c4814269e894d296cf6500e43cd714

f24dd4ox4c2vpf5vk5wkadgyyn6qtuvgcpxxon64a
02e0c7c75f82d55e5ed55db28033630df4274a984f

f2gfvuyh7v2sx3patm5k23wdzmhyhtmqctasbr23y
02316b4c1ff5d4afb7826ceab5bb0f2c3e0f364053

BLS Type Addresses

To aid in readability, these addresses are line-wrapped. Address and hex pairs are separated by ---.

f3vvmn62lofvhjd2ugzca6sof2j2ubwok6cj4xxbfzz
4yuxfkgobpihhd2thlanmsh3w2ptld2gqkn2jvlss4a
---
03ad58df696e2d4e91ea86c881e938ba4ea81b395e12
797b84b9cf314b9546705e839c7a99d606b247ddb4f9
ac7a3414dd

f3wmuu6crofhqmm3v4enos73okk2l366ck6yc4owxwb
dtkmpk42ohkqxfitcpa57pjdcftql4tojda2poeruwa
---
03b3294f0a2e29e0c66ebc235d2fedca5697bf784af
605c75af608e6a63d5cd38ea85ca8989e0efde9188b
382f9372460d

f3s2q2hzhkpiknjgmf4zq3ejab2rh62qbndueslmsdz
ervrhapxr7dftie4kpnpdiv2n6tvkr743ndhrsw6d3a
---
0396a1a3e4ea7a14d49985e661b22401d44fed402d1
d0925b243c923589c0fbc7e32cd04e29ed78d15d37d
3aaa3fe6da33

f3q22fijmmlckhl56rn5nkyamkph3mcfu5ed6dheq53
c244hfmnq2i7efdma3cj5voxenwiummf2ajlsbxc65a
---
0386b454258c589475f7d16f5aac018a79f6c1169d2
0fc33921dd8b5ce1cac6c348f90a3603624f6aeb91b
64518c2e8095

f3u5zgwa4ael3vuocgc5mfgygo4yuqocrntuuhcklf4
xzg5tcaqwbyfabxetwtj4tsam3pbhnwghyhijr5mixa
---
03a7726b038022f75a384617585360cee629070a2d9
d28712965e5f26ecc40858382803724ed34f2720336
f09db631f074

Data Structures

RLE+ Bitset Encoding

RLE+ is a lossless compression format based on RLE. Its primary goal is to reduce the size in the case of many individual bits, where RLE breaks down quickly, while keeping the same level of compression for large sets of contiguous bits.

In tests it has been shown to be more compact than RLE itself, as well as Concise and Roaring.

Format

The format consists of a header, followed by a series of blocks, of which there are three different types.

The format can be expressed as the following BNF grammar.

    <encoding> ::= <header> <blocks>
      <header> ::= <version> <bit>
     <version> ::= "00"
      <blocks> ::= <block> <blocks> | ""
       <block> ::= <block_single> | <block_short> | <block_long>
<block_single> ::= "1"
 <block_short> ::= "01" <bit> <bit> <bit> <bit>
  <block_long> ::= "00" <unsigned_varint>
         <bit> ::= "0" | "1"

An <unsigned_varint> is defined as specified here.

Blocks

The blocks represent how many bits of the current bit type there are. Because 0 and 1 alternate in a bit vector, the initial bit, which is stored in the header, is enough to determine whether a length is currently referencing a run of 0s or 1s.

Block Single

If the running length of the current bit is only 1, it is encoded as a single set bit.

Block Short

If the running length is less than 16, it can be encoded into four bits, which is what a short block represents. The length is encoded into 4 bits and prefixed with 01 to indicate a short block.

Block Long

If the running length is 16 or larger, it is encoded into a varint, and then prefixed with 00 to indicate a long block.

Note: The encoding is unique, so no matter which algorithm for encoding is used, it should produce the same encoding, given the same input.

Bit Numbering

For Filecoin, byte arrays representing RLE+ bitstreams are encoded using LSB 0 bit numbering.
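
As a sketch of how the header and the three block types compose, the encoder below emits the logical bitstream as a string of '0'/'1' characters rather than the packed LSB 0 byte form; the LSB-first bit order used here for the short-length field and the varint is an assumption of this sketch, not a normative statement.

package main

import (
	"fmt"
	"strconv"
)

// varintBits returns the bits of n encoded as an unsigned LEB128 varint,
// emitting each byte's bits least-significant first (an assumption of this sketch).
func varintBits(n uint64) string {
	out := ""
	for {
		b := byte(n & 0x7f)
		n >>= 7
		if n != 0 {
			b |= 0x80 // continuation bit
		}
		for i := 0; i < 8; i++ {
			out += strconv.Itoa(int(b >> i & 1))
		}
		if n == 0 {
			return out
		}
	}
}

// encodeRLEPlus produces the logical RLE+ bitstream for a bitset described as
// alternating run lengths, where the first run consists of firstBit values.
func encodeRLEPlus(firstBit int, runs []uint64) string {
	out := "00"                       // version, always "00"
	out += strconv.Itoa(firstBit & 1) // header bit: value of the first run
	for _, r := range runs {
		switch {
		case r == 1:
			out += "1" // block_single
		case r < 16:
			out += "01" // block_short: 4-bit length
			for i := 0; i < 4; i++ {
				out += strconv.Itoa(int(r >> i & 1))
			}
		default:
			out += "00" + varintBits(r) // block_long: varint length
		}
	}
	return out
}

func main() {
	// One set bit, then three clear bits, then twenty set bits.
	fmt.Println(encodeRLEPlus(1, []uint64{1, 3, 20}))
}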

HAMT

See the draft IPLD hash map spec for details on implementing the HAMT used for the global state tree map and throughout the actor code.

Other Considerations

  • The maximum size of an Object should be 1MB (2^20 bytes). Objects larger than this are invalid.

Filecoin Parameters

Some of these parameters are used around the code in the Filecoin subsystems and ABI. Others are used as part of the proofs libraries.

Most are generated/finalized using the orient framework, which is used to model the Filecoin network.

⚠️ WARNING: Filecoin is not yet launched, and we are finishing protocol spec and implementations. Parameters are set here as placeholders and highly likely to change to fit product and security requirements.

Orient parameters

LAMBDA SPACEGAP BLOCK-SIZE-KIB SECTOR-SIZE-GIB
10 0.03 2.6084006 1024
10 0.03 2.9687543 1024
10 0.03 4.60544 256
10 0.03 6.9628344 256
10 0.03 7.195217 128
10 0.03 12.142387 128
10 0.03 15.2998495 1024
10 0.03 22.186821 32
10 0.03 42.125595 32
10 0.03 55.240646 256
10 0.03 107.03619 128
10 0.03 406.86823 32
10 0.06 2.3094485 1024
10 0.06 2.37085 1024
10 0.06 3.4674127 256
10 0.06 4.686779 256
10 0.06 4.9769444 128
10 0.06 7.705842 128
10 0.06 9.3208065 1024
10 0.06 13.775977 32
10 0.06 25.303907 32
10 0.06 32.48009 256
10 0.06 62.670723 128
10 0.06 238.65137 32
10 0.1 2.1490319 1024
10 0.1 2.1985393 1024
10 0.1 3.0452213 256
10 0.1 3.8423958 256
10 0.1 4.1540065 128
10 0.1 6.059966 128
10 0.1 7.102623 1024
10 0.1 10.6557865 32
10 0.1 19.063526 32
10 0.1 24.036263 256
10 0.1 46.211964 128
10 0.1 176.24756 32
10 0.2 1.9889219 1024
10 0.2 2.1184843 1024
10 0.2 2.7405148 256
10 0.2 3.2329829 256
10 0.2 3.5601068 128
10 0.2 4.8721666 128
10 0.2 5.501524 1024
10 0.2 8.404295 32
10 0.2 14.560543 32
10 0.2 17.942131 256
10 0.2 34.33397 128
10 0.2 131.21773 32
80 0.03 6.5753794 1024
80 0.03 10.902712 1024
80 0.03 19.707468 256
80 0.03 36.63338 128
80 0.03 37.16689 256
80 0.03 71.018715 128
80 0.03 94.63942 1024
80 0.03 133.81236 32
80 0.03 265.37668 32
80 0.03 357.2812 256
80 0.03 695.79944 128
80 0.03 2639.3792 32
80 0.06 4.183762 1024
80 0.06 6.1194773 1024
80 0.06 10.603248 256
80 0.06 18.887196 128
80 0.06 18.958448 256
80 0.06 35.526344 128
80 0.06 46.80707 1024
80 0.06 66.525635 32
80 0.06 130.80322 32
80 0.06 175.19678 256
80 0.06 340.8757 128
80 0.06 1293.6443 32
80 0.1 3.2964888 1024
80 0.1 4.3449306 1024
80 0.1 7.2257156 256
80 0.1 12.203384 256
80 0.1 12.303692 128
80 0.1 22.359337 128
80 0.1 29.061607 1024
80 0.1 41.564106 32
80 0.1 80.880165 32
80 0.1 107.64613 256
80 0.1 209.20566 128
80 0.1 794.4138 32
80 0.2 2.6560488 1024
80 0.2 3.0640512 1024
80 0.2 4.7880635 256
80 0.2 7.32808 256
80 0.2 7.552495 128
80 0.2 12.856943 128
80 0.2 16.252815 1024
80 0.2 23.55217 32
80 0.2 44.856293 32
80 0.2 58.89311 256
80 0.2 114.18173 128
80 0.2 434.17523 32

Audit Reports

Security is a critical component in ensuring Filecoin can fulfill its mission to be the storage network for humanity. In addition to robust secure development processes, trainings, theory audits, and investing in external security research, the Filecoin project has engaged reputable third party auditing specialists to ensure that the theory behind the protocol and its implementation delivers the intended value, enabling Filecoin to be a safe and secure network. This section covers a selection of audit reports that have been published on Filecoin’s theory and implementation.

Filecoin Virtual Machine

2023-03-09 Filecoin EVM (FEVM)

The audit covers the implementation of:

  • FEVM’s builtin actors, of which only actors/evm and actors/eam were included in scope, along with the code base of ref-fvm. The report covered the EVM runtime actor and its implementation and the correctness of the EVM opcodes, including the Ethereum Address Manager (EAM). It also included issues and suggested enhancements for the gas model and F4 addresses. The audit team additionally reviewed the message execution flow and kernel setup, the WASM integration, and FVM logs. All valid issues raised by the audit were resolved and acknowledged, including a few informational issues. More details on these issues are available in the report.

Lotus

2020-10-20 Lotus Mainnet Ready Security Audit

The scope of this audit covered:

  • The Lotus Daemon: Core component responsible for handling the Blockchain node logic by handling peer-to-peer networking, chain syncing, block validation, data retrieval and transfer, etc.
  • The Lotus Storage Miner: Mining component used to manage a single storage miner by contributing to the network through Sector commitments and Proofs-of-Spacetime data proving it is storing the sectors it has committed to. This component communicates with the Lotus daemon via JSON-RPC API calls.

Venus

2021-06-29 Venus Security Audit

The scope of this audit covered:

  • The Venus Daemon: Core component responsible for handling the Filecoin node logic by handling peer-to-peer networking, chain syncing, block validation, etc.

Actors

2020-10-19 Actors Mainnet Ready Security Audit

This audit covers the implementation of Filecoin’s builtin Actors, focusing on the role of Actors as a core component in the business logic of the Filecoin storage network. The audit process involved a manual review of the Actors code and conducting ongoing reviews of changes to the code during the course of the engagement. Issues uncovered through this process are all tracked in the GitHub repository. All Priority 1 issues have been resolved. Most Priority 2 issues have been resolved; ones that are still open have been determined to not be a risk for the Filecoin network or miner experience. Further details on these and all other issues raised are available in the report.

Proofs

2020-10-20 Filecoin Bellman and BLS Signatures

This audit covers the core cryptographic primitives used by the Filecoin Proving subsystem, including BLS signatures, cryptographic arithmetic, pairings, and zk-SNARK operations. The scope of the audit included several repositories (most of the code is written in Rust): bls-signatures, Filecoin’s bellman, ff, group, paired, and rust-sha2ni. The audit uncovered 1 medium severity issue, which has been fixed, and a few other low severity/informational issues (the details of all issues raised and their status at time of publishing are available in the report).

2020-07-28 Filecoin Proving Subsystem

This audit covers the full Proving subsystem, including rust-fil-proofs and filecoin-ffi, through which Proof of Space-Time (PoSt), Proof of Retrievability (PoR), and Proof of Replication (PoRep) are implemented. The audit process included using fuzzing to identify potential vulnerabilities in the subsystem, each of which was resolved (the details of all issues raised and their resolutions are available in the report).

2020-07-28 zk-SNARK proofs

This audit covers the core logic and implementation of the zk-SNARK tree-based proofs-of-replication (including the fork of bellman), as well as the SNARK circuits creation. All issues raised by the audit were resolved.

GossipSub

2020-06-03 GossipSub Design and Implementation

This audit focused specifically on GossipSub, a pubsub protocol built on libp2p, version 1.1, which includes a peer scoring layer to mitigate certain types of attacks that could compromise a network. The audit covered the spec, go-libp2p-pubsub and gossipsub-hardening. The report found 4 issues, primarily in the Peer Scoring that was introduced in v1.1, and includes additional suggestions. All the issues raised in the report have been resolved, and additional details are available in the report linked above.

2020-04-18 GossipSub Evaluation

This evaluation focused on demonstrating that GossipSub is resilient against a range of attacks, capable of recovering the mesh, and can meet the message delivery requirements for Filecoin. Attacks used in testing include the Sybil, Eclipse, Degradation, Censorship, Attack at Dawn, “Cover Flash”, and “Cold Boot” attacks. The spec for v1.1, v1.0 and the reference implementation were in scope for this audit.

Drand

2020-08-09 drand reference implementation Security Audit

This report covers the end-to-end audit carried out on drand, including the implementations found in drand/drand, drand/bls12-381 and drand/kyber. The audit assessed drand’s ability to securely provide a distributed, continuous source of entropy / randomness for Filecoin, and included using fuzzing to find potential leaks, errors, or panics. A number of issues were found, 14 of which were marked as ranging from low to high risk, all of which have been resolved (the details of all issues raised and their resolutions are available in the report).

Filecoin Implementations

Filecoin is targeting multiple implementations of the protocol in order to guarantee both the security and the resilience of the Filecoin network. There are currently four active implementation efforts:

| Repo           | Language | CI      | Test Coverage | Security Audit |
|----------------|----------|---------|---------------|----------------|
| lotus          | go       | Failed  | 40%           | Reports        |
| go-fil-markets | go       | Passed  | 58%           | Reports        |
| specs-actors   | go       | Unknown | 69%           | Reports        |
|                | rust     | Unknown | Unknown       | Reports        |
| venus          | go       | Unknown | 24%           | Missing        |
| forest         | rust     | Passed  | 55%           | Missing        |
| cpp-filecoin   | c++      | Passed  | 45%           | Missing        |

Lotus

Lotus is an implementation of the Filecoin Distributed Storage Network. Lotus is written in Go and it is designed to be modular and interoperable with other implementations of Filecoin.

You can run the Lotus software client to join the Filecoin Testnet. Lotus can run on MacOS and Linux. Windows is not supported yet.

The two main components of Lotus are:

  1. The Lotus Node can sync the blockchain, validating all blocks, transfers, and deals along the way. It can also facilitate the creation of new storage deals. Running this type of node is ideal for users that do not wish to contribute storage to the network or to produce new blocks and extend the blockchain.
  2. The Lotus Storage Miner can register as a miner in the network, register storage, accept deals and store data. The Lotus Storage Miner can produce blocks, extend the blockchain and receive rewards for new blocks added to the network.

You can find the Lotus codebase here and further documentation, how-to guides and a list of FAQs at lotu.sh.

The Lotus implementation of Filecoin is supported by Protocol Labs.

Venus

Venus, previously called go-filecoin, is another implementation of Filecoin in Go and is maintained by the IPFS Force Community. The go-filecoin implementation, before it was renamed to Venus and taken over by IPFS Force, was already nearly feature-complete (as of June 2020) with go-filecoin nodes interoperating with Lotus nodes.

Protocol Labs is offering DevGrants to support the development of Venus.

You can find the Venus codebase here and its extensive documentation website here.

Forest

Forest is an implementation of Filecoin written in Rust. The implementation will take a modular approach to building a full Filecoin node in two parts — (i) building Filecoin’s security critical systems in Rust from the Filecoin Protocol Specification, specifically the virtual machine, blockchain, and node system, and (ii) integrating functional components for storage mining and storage & retrieval markets to compose a fully functional Filecoin node implementation.

You can find the Forest codebase here and the documentation site here.

The Forest implementation of Filecoin is supported by ChainSafe.

Fuhon (cpp-filecoin)

Fuhon is the C++ implementation of Filecoin. The implementation uses Rust libraries for BLS, so Rust is needed in order to build successfully.

You can find the Fuhon codebase here.

The Fuhon implementation of Filecoin is supported by Soramitsu.

Since May 2022, this implementation has been deprecated and out of support. The existing code repositories will remain public.

Releases