Introduction
Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.
The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).
Spec Status
Each section of the spec must be stable and audited before it is considered done. The state of each section is tracked below.
- The State column indicates the stability as defined in the legend.
- The Theory Audit column shows the date of the last theory audit with a link to the report.
Spec Status Legend
Spec state description | Label |
---|---|
Unlikely to change in the foreseeable future. | Stable |
All content is correct. Important details are covered. | Reliable |
All content is correct. Details are being worked on. | Draft/WIP |
Do not follow. Important things have changed. | Incorrect |
No work has been done yet. | Missing |
Spec Status Overview
Section | State | Theory Audit |
---|---|---|
1 Introduction | Reliable | |
1.2 Architecture Diagrams | Reliable | |
1.3 Key Concepts | Reliable | |
1.4 Filecoin VM | Reliable | |
1.5 System Decomposition | Reliable | |
1.5.1 What are Systems? How do they work? | Reliable | |
1.5.2 Implementing Systems | Reliable | |
2 Systems | Draft/WIP | |
2.1 Filecoin Nodes | Reliable | |
2.1.1 Node Types | Stable | |
2.1.2 Node Repository | Stable | |
2.1.2.1 Key Store | Reliable | |
2.1.2.2 IPLD Store | Stable | Draft/WIP |
2.1.3 Network Interface | Stable | |
2.1.4 Clock | Reliable | |
2.2 Files & Data | Reliable | |
2.2.1 File | Reliable | |
2.2.1.1 FileStore - Local Storage for Files | Reliable | |
2.2.2 The Filecoin Piece | Stable | |
2.2.3 Data Transfer in Filecoin | Stable | |
2.2.4 Data Formats and Serialization | Reliable | |
2.3 Virtual Machine | Reliable | |
2.3.1 VM Actor Interface | Reliable | Draft/WIP |
2.3.2 State Tree | Reliable | Draft/WIP |
2.3.3 VM Message - Actor Method Invocation | Reliable | Draft/WIP |
2.3.4 VM Runtime Environment (Inside the VM) | Reliable | |
2.3.5 Gas Fees | Reliable | Report Coming Soon |
2.3.6 System Actors | Reliable | Reports |
2.3.7 VM Interpreter - Message Invocation (Outside VM) | Draft/WIP | Draft/WIP |
2.4 Blockchain | Reliable | Draft/WIP |
2.4.1 Blocks | Reliable | |
2.4.1.1 Block | Reliable | |
2.4.1.2 Tipset | Reliable | |
2.4.1.3 Chain Manager | Reliable | |
2.4.1.4 Block Producer | Reliable | Draft/WIP |
2.4.2 Message Pool | Stable | Draft/WIP |
2.4.2.1 Message Propagation | Stable | |
2.4.2.2 Message Storage | Stable | |
2.4.3 ChainSync | Stable | |
2.4.4 Storage Power Consensus | Reliable | Draft/WIP |
2.4.4.6 Storage Power Actor | Reliable | Draft/WIP |
2.5 Token | Reliable | |
2.5.1 Minting Model | Reliable | |
2.5.2 Block Reward Minting | Reliable | |
2.5.3 Token Allocation | Reliable | |
2.5.4 Payment Channels | Stable | Draft/WIP |
2.5.5 Multisig Wallet & Actor | Reliable | Reports |
2.6 Storage Mining | Reliable | Draft/WIP |
2.6.1 Sector | Stable | |
2.6.1.1 Sector Lifecycle | Stable | |
2.6.1.2 Sector Quality | Stable | |
2.6.1.3 Sector Sealing | Stable | Draft/WIP |
2.6.1.4 Sector Faults | Stable | Draft/WIP |
2.6.1.5 Sector Recovery | Reliable | Draft/WIP |
2.6.1.6 Adding Storage | Stable | Draft/WIP |
2.6.1.7 Upgrading Sectors | Stable | Draft/WIP |
2.6.2 Storage Miner | Reliable | Draft/WIP |
2.6.2.4 Storage Mining Cycle | Reliable | Draft/WIP |
2.6.2.5 Storage Miner Actor | Draft/WIP | Reports |
2.6.3 Miner Collaterals | Reliable | |
2.6.4 Storage Proving | Draft/WIP | Draft/WIP |
2.6.4.2 Sector Poster | Draft/WIP | Draft/WIP |
2.6.4.3 Sector Sealer | Draft/WIP | Draft/WIP |
2.7 Markets | Stable | |
2.7.1 Storage Market in Filecoin | Stable | Draft/WIP |
2.7.2 Storage Market On-Chain Components | Reliable | Draft/WIP |
2.7.2.3 Storage Market Actor | Reliable | Reports |
2.7.2.4 Storage Deal Flow | Reliable | Draft/WIP |
2.7.2.5 Storage Deal States | Reliable | |
2.7.2.6 Faults | Reliable | Draft/WIP |
2.7.3 Retrieval Market in Filecoin | Stable | |
2.7.3.5 Retrieval Peer Resolver | Stable | |
2.7.3.6 Retrieval Protocols | Stable | |
2.7.3.7 Retrieval Client | Stable | |
2.7.3.8 Retrieval Provider (Miner) | Stable | |
2.7.3.9 Retrieval Deal Status | Stable | |
3 Libraries | Reliable | |
3.1 DRAND | Stable | Reports |
3.2 IPFS | Stable | Draft/WIP |
3.3 Multiformats | Stable | |
3.4 IPLD | Stable | |
3.5 Libp2p | Stable | Draft/WIP |
4 Algorithms | Draft/WIP | |
4.1 Expected Consensus | Reliable | Draft/WIP |
4.2 Proof-of-Storage | Reliable | Draft/WIP |
4.2.2 Proof-of-Replication (PoRep) | Reliable | Draft/WIP |
4.2.3 Proof-of-Spacetime (PoSt) | Reliable | Draft/WIP |
4.3 Stacked DRG Proof of Replication | Stable | Report Coming Soon |
4.3.16 SDR Notation, Constants, and Types | Stable | Report Coming Soon |
4.4 BlockSync | Stable | |
4.5 GossipSub | Stable | Reports |
4.6 Cryptographic Primitives | Draft/WIP | |
4.6.1 Signatures | Draft/WIP | Report Coming Soon |
4.6.2 Verifiable Random Function | Incorrect | |
4.6.3 Randomness | Reliable | Draft/WIP |
4.6.4 Poseidon | Incorrect | Missing |
4.7 Verified Clients | Draft/WIP | Draft/WIP |
4.8 Filecoin CryptoEconomics | Reliable | Draft/WIP |
5 Glossary | Reliable | |
6 Appendix | Draft/WIP | |
6.1 Filecoin Address | Reliable | |
6.2 Data Structures | Reliable | |
6.3 Filecoin Parameters | Draft/WIP | |
6.4 Audit Reports | Reliable | |
7 Filecoin Implementations | Reliable | |
7.1 Lotus | Reliable | |
7.2 Venus | Reliable | |
7.3 Forest | Reliable | |
7.4 Fuhon (cpp-filecoin) | Reliable | |
8 Releases | | |
Spec Stabilization Progress
This section tracks what percentage of the spec sections are considered stable (shown as a progress bar on the spec website).
Implementations Status
Known implementations of the Filecoin spec are tracked below, with their current CI build status, their test coverage as reported by codecov.io, and a link to their last security audit report where one exists.
Repo | Language | CI | Test Coverage | Security Audit |
---|---|---|---|---|
lotus | go | Failed | 40% | Reports |
go-fil-markets | go | Passed | 58% | Reports |
specs-actors | go | Unknown | 69% | Reports |
rust-fil-proofs | rust | Unknown | Unknown | Reports |
venus | go | Passed | 23% | Missing |
forest | rust | Passed | 56% | Missing |
cpp-filecoin | c++ | Passed | 45% | Missing |
Architecture Diagrams
Actor State Diagram
Key Concepts
For clarity, we use the following types of entities to describe implementations of the Filecoin protocol:
- Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).
- Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).
- Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, the ChainSync is a component: it could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.
- APIs are the interfaces for delivering messages to components. A client’s view of a given sub-protocol, such as a request to a miner node’s Storage Provider component to store files in the storage market, may require the execution of a series of API requests.
- Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components and supports all of the APIs detailed in the spec.
- Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.
- Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.
Filecoin VM
The majority of Filecoin’s user-facing functionality (payments, storage market, power table, etc.) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks and agrees which ‘chain’ of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.
The global state here consists of a set of actors, each with their own private state.
An actor is the Filecoin equivalent of Ethereum’s smart contracts: it is essentially an ‘object’ in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor.
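The sketch below summarizes the per-actor record described above as a Go struct. It is illustrative only: the field names are assumptions made for this example, and the normative definitions live in the VM Subsystem.
// A minimal sketch of the per-actor record kept in the state tree.
package vm

import (
	"math/big"

	cid "github.com/ipfs/go-cid"
)

// Actor is the on-chain record the VM keeps for every actor.
type Actor struct {
	Code    cid.Cid  // CID identifying the actor's type (its code)
	Head    cid.Cid  // pointer to the actor's private state
	Nonce   uint64   // number of messages sent by this actor
	Balance *big.Int // FIL balance attributed to this actor
}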
There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (i.e., a normal user with Filecoin) you must send a signed message to the network, and pay a fee to the miner that includes your message. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message’s execution. The fee here is equivalent to transaction fees in Bitcoin and Ethereum, in that it is proportional to the work that is done to process the message (Bitcoin prices messages per byte, Ethereum uses the concept of ‘gas’; Filecoin also uses ‘gas’).
Second, an actor may call a method on another actor during the invocation of one of its methods. However, this may only happen as a result of some actor being invoked by an external user’s message (note: an actor called by a user may call another actor that then calls another actor, as many layers deep as the execution can afford to run for).
For full implementation details, see the VM Subsystem.
System Decomposition
What are Systems? How do they work?
Filecoin decouples and modularizes functionality into loosely-joined systems.
Each system adds significant functionality, usually to achieve a set of important and tightly related goals.
For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separated from the Files, Pieces, Piece Preparation, and Data Transfer. Both of these systems are separated from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.
Why is System decoupling useful?
This decoupling is useful for:
- Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security-critical systems (e.g. Blockchain), but do not need many implementations of systems that can be decoupled.
- Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
- Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
- Scalability: systems, and various use cases, may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.
Filecoin Nodes don’t need all the systems
Filecoin Nodes vary significantly and do not need all the systems. Most systems are only needed for a subset of use cases.
For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust.
Note: Filecoin does not use the “full node” or “light client” terminology, in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore, in terms of the Systems they run. For example:
- Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
- Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
- Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
- Storage Miner Node: Runs the Blockchain, Storage Market, Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.
Separating Systems
How do we determine what functionality belongs in one system vs another?
Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries naturally spring from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other is very unrelated to VM Execution.
Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single Actor. This caused a large coupling of functionality across StorageDeal making, the StorageMarket, and markets in general, with Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking apart the one actor into two.
Decomposing within a System
Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.
Implementing Systems
System Requirements
In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be achieved by implementations in a variety of ways; implementations should treat the guidance here as a recommendation (SHOULD).
All Systems, as defined in this document, require the following:
- Repository:
  - Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
  - User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
  - Local, Secure KeyStore. A facility to generate and use cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, and should do so over an abstraction (i.e. the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
- Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
- Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
- Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.
For this purpose, we use the FilecoinNode data structure, which is passed into all systems at initialization.
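As an illustration of how these facilities might be bundled together, the following Go sketch shows one possible shape for a FilecoinNode value handed to every system at initialization. All names and member interfaces here are placeholders for the abstractions described above, not the definitions used by any particular implementation.
// A minimal sketch, with assumed names, of the bundle passed to every system.
package node

import "time"

// Placeholder abstractions; the real definitions live in the respective subsystems.
type (
	IpldStore  interface{} // persistent store for small IPLD data structures
	FileStore  interface{} // persistent store for large files
	KeyStore   interface{} // signing/encryption without exposing private keys
	Libp2pHost interface{} // the libp2p.Node systems mount their protocols on
)

// Clock abstracts network time so systems never read the host clock directly.
type Clock interface {
	Now() time.Time
}

// Repository groups the node's local storage facilities.
type Repository struct {
	Ipld   IpldStore
	Keys   KeyStore
	Config map[string]string // user-editable configuration values
}

// FilecoinNode is passed to all systems when they are constructed.
type FilecoinNode struct {
	Repo  Repository
	Files FileStore
	Net   Libp2pHost
	Clock Clock
}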
System Limitations
Further, Systems MUST abide by the following limitations:
- Random crashes. A Filecoin Node may crash at any moment. Systems must be secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through Ipld data structures, and through the use of initialization routines that check state, and perhaps correct errors.
- Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it.) This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
- No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
- No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (e.g. Bluetooth, LANs, etc.).
Systems
In this section we detail all the system components one by one, in increasing level of complexity and/or interdependence with other system components. The interaction of the components with each other is only briefly discussed where appropriate; the overall workflow is given in the Introduction section. In particular, in this section we discuss:
- Filecoin Nodes: the different types of nodes that participate in the Filecoin Network, as well as important parts and processes that these nodes run, such as the key store and IPLD store, as well as the network interface to libp2p.
- Files & Data: the data units of Filecoin, such as the Sectors and the Pieces.
- Virtual Machine: the subcomponents of the Filecoin VM, such as the actors, i.e., the smart contracts that run on the Filecoin Blockchain, and the State Tree.
- Blockchain: the main building blocks of the Filecoin blockchain, such as the structure of messages and blocks, the message pool, as well as how nodes synchronise the blockchain when they first join the network.
- Token: the components needed for a wallet.
- Storage Mining: the details of storage mining, storage power consensus, and how storage miners prove storage (without going into details of proofs, which are discussed later).
- Markets: the storage and retrieval markets, which are primarily processes that take place off-chain, but are very important for the smooth operation of the decentralised storage market.
Filecoin Nodes
This section starts by discussing the concept of Filecoin Nodes. Although different node types in the Lotus implementation of Filecoin are less strictly defined than in other blockchain networks, there are different properties and features that different types of nodes should implement. In short, nodes are defined based on the set of services they provide.
In this section we also discuss issues related to storage of system files in Filecoin nodes. Note that by storage in this section we do not refer to the storage that a node commits for mining in the network, but rather the local storage repositories that it needs to have available for keys and IPLD data among other things.
Finally, we discuss the network interface and how nodes find and connect with each other, how they interact and propagate messages using libp2p, as well as how to set the node’s clock.
Node Types
Nodes in the Filecoin network are primarily identified in terms of the services they provide. The type of node, therefore, depends on which services a node provides. A basic set of services in the Filecoin network include:
- chain verification
- storage market client
- storage market provider
- retrieval market client
- retrieval market provider
- storage mining
Any node participating in the Filecoin network should provide the chain verification service as a minimum. Depending on which extra services a node provides on top of chain verification, it gets the corresponding functionality and Node Type “label”.
Nodes can be realized with a repository (directory) in the host in a one-to-one relationship - that is, one repo belongs to a single node. That said, one host can implement multiple Filecoin nodes by having the corresponding repositories.
A Filecoin implementation can support the following subsystems, or types of nodes:
- Chain Verifier Node: this is the minimum functionality that a node needs to have in order to participate in the Filecoin network. This type of node cannot play an active role in the network, unless it implements Client Node functionality, described below. A Chain Verifier Node must synchronise the chain (ChainSync) when it first joins the network to reach current consensus. From then on, the node must constantly be fetching any addition to the chain (i.e., receive the latest blocks) and validate them to reach consensus state.
- Client Node: this type of node builds on top of the Chain Verifier Node and must be implemented by any application that is building on the Filecoin network. This can be thought of as the main infrastructure node (at least as far as interaction with the blockchain is concerned) of applications such as exchanges or decentralised storage applications building on Filecoin. The node should implement the storage market and retrieval market client services. The client node should interact with the Storage and Retrieval Markets and be able to do Data Transfers through the Data Transfer Module.
- Retrieval Miner Node: this node type extends the Chain Verifier Node to add retrieval miner functionality, that is, to participate in the retrieval market. As such, this node type needs to implement the retrieval market provider service and be able to do Data Transfers through the Data Transfer Module.
- Storage Miner Node: this type of node must implement all of the required functionality for validating, creating and adding blocks to extend the blockchain. It should implement the chain verification, storage mining and storage market provider services and be able to do Data Transfers through the Data Transfer Module.
Node Interface
The Lotus implementation of the Node Interface can be found here.
Chain Verifier Node
type ChainVerifierNode interface {
	FilecoinNode
	systems.Blockchain
}
The Lotus implementation of the Chain Verifier Node can be found here.
Client Node
type ClientNode struct {
	FilecoinNode
	systems.Blockchain
	markets.StorageMarketClient
	markets.RetrievalMarketClient
	markets.DataTransfers
}
The Lotus implementation of the Client Node can be found here.
Storage Miner Node
type StorageMinerNode interface {
	FilecoinNode
	systems.Blockchain
	systems.Mining
	markets.StorageMarketProvider
	markets.DataTransfers
}
The Lotus implementation of the Storage Miner Node can be found here.
Retrieval Miner Node
type RetrievalMinerNode interface {
	FilecoinNode
	blockchain.Blockchain
	markets.RetrievalMarketProvider
	markets.DataTransfers
}
Relayer Node
type RelayerNode interface {
	FilecoinNode
	blockchain.MessagePool
}
Node Configuration
The Lotus implementation of Filecoin Node configuration values can be found here.
Node Repository
The Filecoin node repository is simply local storage for system and chain data. It is an abstraction of the data which any functional Filecoin node needs to store locally in order to run correctly.
The repository is accessible to the node’s systems and subsystems and can be compartmentalized from the node’s FileStore.
The repository stores the node’s keys, the IPLD data structures of stateful objects as well as the node configuration settings.
The Lotus implementation of the FileStore Repository can be found here.
Key Store
The Key Store is a fundamental abstraction in any full Filecoin node, used to store the keypairs associated with a given miner’s address (see actual definition further down) and distinct workers (should the miner choose to run multiple workers).
Node security depends in large part on keeping these keys secure. To that end we strongly recommend: 1) keeping keys separate from all subsystems, 2) using a separate key store to sign requests as required by other subsystems, and 3) keeping those keys that are not used as part of mining in cold storage.
Filecoin storage miners rely on three main components:
- The storage miner actor address is uniquely assigned to a given storage miner actor upon calling registerMiner() in the Storage Power Consensus Subsystem. In effect, the storage miner does not have an address itself, but is rather identified by the address of the actor it is tied to. This is a unique identifier for a given storage miner to which its power and other keys will be associated. The actor value specifies the address of an already created miner actor.
- The owner keypair is provided by the miner ahead of registration and its public key is associated with the miner address. The owner keypair can be used to administer a miner and withdraw funds.
- The worker keypair is the public key associated with the storage miner actor address. It can be chosen and changed by the miner. The worker keypair is used to sign blocks and may also be used to sign other messages. It must be a BLS keypair, given its use as part of the Verifiable Random Function.
Multiple storage miner actors can share one owner public key or likewise a worker public key.
The process for changing the worker keypairs on-chain (i.e. the worker Key associated with a storage miner actor) is specified in Storage Miner Actor. Note that this is a two-step process. First, a miner stages a change by sending a message to the chain. Then, the miner confirms the key change after the randomness lookback time. Finally, the miner will begin signing blocks with the new key after an additional randomness lookback time. This delay exists to prevent adaptive key selection attacks.
Key security is of utmost importance in Filecoin, as is also the case with keys in every blockchain. Failure to securely store and use keys or exposure of private keys to adversaries can result in the adversary having access to the miner’s funds.
IPLD Store
InterPlanetary Linked Data (IPLD) is a set of libraries which allow for the interoperability of content-addressed data structures across different distributed systems and protocols. It provides a fundamental ‘common language’ to primitive cryptographic hashing, enabling data structures to be verifiably referenced and retrieved between two independent protocols. For example, a user can reference an IPFS directory in an Ethereum transaction or smart contract.
The IPLD Store of a Filecoin Node is local storage for hash-linked data.
IPLD is fundamentally comprised of three layers:
- the Block Layer, which focuses on block formats and addressing, how blocks can advertise or self-describe their codec
- the Data Model Layer, which defines a set of required types that need to be included in any implementation - discussed in more detail below.
- the Schema Layer, which allows for extension of the Data Model to interact with more complex structures without the need for custom translation abstractions.
Further details about IPLD can be found in its specification.
The Data Model
At its core, IPLD defines a Data Model for representing data. The Data Model is designed for practical implementation across a wide variety of programming languages, while maintaining usability for content-addressed data and a broad range of generalized tools that interact with that data.
The Data Model includes a range of standard primitive types (or “kinds”), such as booleans, integers, strings, nulls and byte arrays, as well as two recursive types: lists and maps. Because IPLD is designed for content-addressed data, it also includes a “link” primitive in its Data Model. In practice, links use the CID specification. IPLD data is organized into “blocks”, where a block is represented by the raw, encoded data and its content-address, or CID. Every content-addressable chunk of data can be represented as a block, and together, blocks can form a coherent graph, or Merkle DAG.
Applications interact with IPLD via the Data Model, and IPLD handles marshalling and unmarshalling via a suite of codecs. IPLD codecs may support the complete Data Model or part of the Data Model. Two codecs that support the complete Data Model are DAG-CBOR and DAG-JSON. These codecs are respectively based on the CBOR and JSON serialization formats, but include formalizations that allow them to encapsulate the IPLD Data Model (including its link type) and additional rules that create a strict mapping between any set of data and its respective content address (or hash digest). These rules include mandating a particular ordering of keys when encoding maps, and the sizing of integer types when stored.
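To make the relationship between a block, its codec, and its content address concrete, the following Go sketch computes the CID of a small DAG-CBOR-encoded block using the go-cid and go-multihash libraries. The choice of SHA2-256 here is just an example; any supported multihash may be used.
// Sketch: deriving the CID (content address) of a DAG-CBOR block.
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Pretend this is a DAG-CBOR-encoded IPLD node: the CBOR map {"a": 1}.
	block := []byte{0xa1, 0x61, 0x61, 0x01}

	// Hash the encoded block with SHA2-256.
	digest, err := mh.Sum(block, mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// A CIDv1 binds the codec (DAG-CBOR) to the multihash of the block,
	// giving the block its content address.
	c := cid.NewCidV1(cid.DagCBOR, digest)
	fmt.Println(c) // prints a base32-encoded CIDv1
}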
IPLD in Filecoin
IPLD is used in two ways in the Filecoin network:
- All system datastructures are stored using DAG-CBOR (an IPLD codec). DAG-CBOR is a stricter subset of CBOR with a predefined tagging scheme, designed for storage, retrieval and traversal of hash-linked data DAGs. Compared to plain CBOR, DAG-CBOR can guarantee determinism.
- Files and data stored on the Filecoin network are also stored using various IPLD codecs (not necessarily DAG-CBOR).
IPLD provides a consistent and coherent abstraction above data that allows Filecoin to build and interact with complex, multi-block data structures, such as HAMT and AMT. Filecoin uses the DAG-CBOR codec for the serialization and deserialization of its data structures and interacts with that data using the IPLD Data Model, upon which various tools are built. IPLD Selectors can also be used to address specific nodes within a linked data structure.
IpldStores
The Filecoin network relies primarily on two distinct IPLD GraphStores:
- One ChainStore which stores the blockchain, including block headers, associated messages, etc.
- One StateStore which stores the payload state from a given blockchain, or the stateTree resulting from all block messages in a given chain being applied to the genesis state by the Filecoin VM.
The ChainStore is downloaded by a node from its peers during the bootstrapping phase of ChainSync and is stored by the node thereafter. It is updated on every new block reception, or if the node syncs to a new best chain.
The StateStore is computed through the execution of all block messages in a given ChainStore and is stored by the node thereafter. It is updated with every new incoming block’s processing by the VM Interpreter, and referenced accordingly by new blocks produced atop it in the block header’s ParentState field.
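A minimal Go sketch of how the two stores could be kept apart while sharing a common block-by-CID facility is shown below; the type names are assumptions for illustration only, not the types used by any particular implementation.
// Sketch: two graph stores layered over a shared get/put-by-CID facility.
package stores

import cid "github.com/ipfs/go-cid"

// BlockByCID is the minimal facility both stores build on.
type BlockByCID interface {
	Get(c cid.Cid) ([]byte, error)
	Put(c cid.Cid, data []byte) error
}

// ChainStore persists the chain itself: block headers, messages, tipsets.
type ChainStore struct{ Blocks BlockByCID }

// StateStore persists state trees produced by applying block messages with the
// VM; new blocks reference them via their ParentState CID.
type StateStore struct{ Blocks BlockByCID }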
Network Interface
Filecoin nodes use several protocols of the libp2p networking stack for peer discovery, peer routing and block and message propagation. Libp2p is a modular networking stack for peer-to-peer networks. It includes several protocols and mechanisms to enable efficient, secure and resilient peer-to-peer communication. Libp2p nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols that each of them supports and all Filecoin related protocols will be mounted under /fil/... protocol identifiers.
The complete specification of libp2p can be found at https://github.com/libp2p/specs. Here is the list of libp2p protocols used by Filecoin.
- Graphsync: Graphsync is a protocol to synchronize graphs across peers. It is used to reference, address, request and transfer blockchain and user data between Filecoin nodes. The draft specification of GraphSync provides more details on the concepts, the interfaces and the network messages used by GraphSync. There are no Filecoin-specific modifications to the protocol id.
- Gossipsub: Block headers and messages are propagated through the Filecoin network using a gossip-based pubsub protocol called GossipSub. As is traditionally the case with pubsub protocols, nodes subscribe to topics and receive messages published on those topics. When nodes receive messages from a topic they are subscribed to, they run a validation process and then i) pass the message to the application, and ii) forward the message to other nodes they know to be subscribed to the same topic. Furthermore, v1.1 of GossipSub, which is the version used in Filecoin, is enhanced with security mechanisms that make the protocol resilient against attacks. The GossipSub Specification provides all the protocol details pertaining to its design and implementation, as well as specific settings for the protocol’s parameters. There have been no Filecoin-specific modifications to the protocol id. However, the topic identifiers MUST be of the form fil/blocks/<network-name> and fil/msgs/<network-name>.
- Kademlia DHT: The Kademlia DHT is a distributed hash table with a logarithmic bound on the maximum number of lookups for a particular node. In the Filecoin network, the Kademlia DHT is used primarily for peer discovery and peer routing. In particular, when a node wants to store data in the Filecoin network, it gets a list of miners and their node information. This node information includes (among other things) the PeerID of the miner. In order to connect to the miner and exchange data, the node that wants to store data in the network has to find the Multiaddress of the miner, which it does by querying the DHT. The libp2p Kad DHT Specification provides implementation details of the DHT structure. For the Filecoin network, the protocol id must be of the form fil/<network-name>/kad/1.0.0.
- Bootstrap List: This is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses are defined by the users (i.e., applications).
- Peer Exchange: This protocol is the realisation of the peer discovery process discussed above in the Kademlia DHT. It enables peers to find information and addresses of other peers in the network by interfacing with the DHT and creating and issuing queries for the peers they want to connect to.
Clock
Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock (tolerating some bounded offset).
Filecoin relies on this system clock in order to secure consensus. Specifically, the clock is necessary to support validation rules that prevent block producers from mining blocks with a future timestamp and running leader elections more frequently than the protocol allows.
Clock uses
The Filecoin system clock is used:
- by syncing nodes to validate that incoming blocks were mined in the appropriate epoch given their timestamp (see Block Validation). This is possible because the system clock maps all times to a unique epoch number totally determined by the start time in the genesis block.
- by syncing nodes to drop blocks coming from a future epoch
- by mining nodes to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in the current round (see Storage Power Consensus).
In order to allow miners to do the above, the system clock must:
- Have low enough offset relative to other nodes so that blocks are not mined in epochs considered future epochs from the perspective of other nodes (those blocks should not be validated until the proper epoch/time as per validation rules).
- Set epoch number on node initialization equal to epoch = Floor[(current_time - genesis_time) / epoch_time]
It is expected that other subsystems will register to a NewRound() event from the clock subsystem.
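A minimal Go sketch of the epoch calculation above is shown below. The epoch duration and genesis timestamp used here are placeholders for illustration; the real values are network parameters.
// Sketch: epoch = Floor[(current_time - genesis_time) / epoch_time]
package main

import (
	"fmt"
	"time"
)

const epochDuration = 30 * time.Second // assumed epoch duration, not normative

func currentEpoch(genesis, now time.Time) int64 {
	if now.Before(genesis) {
		return 0
	}
	// Integer division of durations gives the floor of the quotient.
	return int64(now.Sub(genesis) / epochDuration)
}

func main() {
	// Placeholder genesis time for illustration only.
	genesis := time.Date(2020, 8, 24, 22, 0, 0, 0, time.UTC)
	fmt.Println("current epoch:", currentEpoch(genesis, time.Now()))
}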
Clock Requirements
Clocks used as part of the Filecoin protocol should be kept in sync, with offset less than 1 second so as to enable appropriate validation.
Computer-grade crystals can be expected to deviate by 1ppm (i.e. 1 microsecond every second, or 0.6 seconds per week). Therefore, in order to respect the requirement above:
- Nodes SHOULD run an NTP daemon (e.g. timesyncd, ntpd, chronyd) to keep their clocks synchronized to one or more reliable external references.
- Larger mining operations MAY consider using local NTP/PTP servers with GPS references and/or frequency-stable external clocks for improved timekeeping.
Mining operations have a strong incentive to prevent their clocks from skewing ahead by more than one epoch, to keep their block submissions from being rejected. Likewise, they have an incentive to prevent their clocks from skewing behind by more than one epoch, to avoid partitioning themselves off from the synchronized nodes in the network.
Files & Data
Filecoin’s primary aim is to store clients’ Files and Data. This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.
File
// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string

// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
	Path() Path
	Size() int
	Close() error

	// Read reads from File into buf, starting at offset, and for size bytes.
	Read(offset int, size int, buf Bytes) struct {size int, e error}

	// Write writes from buf into File, starting at offset, and for size bytes.
	Write(offset int, size int, buf Bytes) struct {size int, e error}
}
FileStore - Local Storage for Files
The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.
// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
	Open(p Path) union {f File, e error}
	Create(p Path) union {f File, e error}
	Store(p Path, f File) error
	Delete(p Path) error

	// maybe add:
	// Copy(SrcPath, DstPath)
}
Varying user needs
Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes it easy for implementations to make it swappable, and for end-users to swap out with their system of choice.
Implementation examples
The FileStore interface may be implemented by many kinds of backing data storage systems. For example:
- The host Operating System file system
- Any Unix/Posix file system
- RAID-backed file systems
- Networked or distributed file systems (NFS, HDFS, etc)
- IPFS
- Databases
- NAS systems
- Raw serial or block devices
- Raw hard drives (hdd sectors, etc)
Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.
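As a sketch of the recommended host-OS-backed option, the following Go code implements a small subset of the FileStore operations directly over the operating system’s file system. The type and method names mirror the pseudocode above, but the error-tuple returns are replaced with idiomatic Go multiple return values; this is illustrative only, not a prescribed implementation.
// Sketch: a FileStore backed by the host OS file system.
package filestore

import (
	"os"
	"path/filepath"
)

// Path is an opaque locator for a file, resolved under a root directory.
type Path string

// OSFileStore stores files under Root on the host file system.
type OSFileStore struct {
	Root string
}

func (s *OSFileStore) resolve(p Path) string {
	return filepath.Join(s.Root, filepath.Clean(string(p)))
}

// Open opens an existing file for reading and writing.
func (s *OSFileStore) Open(p Path) (*os.File, error) {
	return os.OpenFile(s.resolve(p), os.O_RDWR, 0)
}

// Create creates (or truncates) a file at the given path.
func (s *OSFileStore) Create(p Path) (*os.File, error) {
	if err := os.MkdirAll(filepath.Dir(s.resolve(p)), 0o755); err != nil {
		return nil, err
	}
	return os.Create(s.resolve(p))
}

// Delete removes the file at the given path.
func (s *OSFileStore) Delete(p Path) error {
	return os.Remove(s.resolve(p))
}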
The Filecoin Piece
The Filecoin Piece is the main unit of negotiation for data that users store on the Filecoin network. The Filecoin Piece is not a unit of storage and is not of a specific size; it is upper-bounded by the size of the Sector. A Filecoin Piece can be of any size, but if a Piece is larger than the size of a Sector that the miner supports, it has to be split into more Pieces so that each Piece fits into a Sector.
A Piece is an object that represents a whole or part of a File, and is used by Storage Clients and Storage Miners in Deals. Storage Clients hire Storage Miners to store Pieces.
The Piece data structure is designed for proving storage of arbitrary IPLD graphs and client data. This diagram shows the detailed composition of a Piece and its proving tree, including both full and bandwidth-optimized Piece data structures.
Data Representation
It is important to highlight that data submitted to the Filecoin network goes through several transformations before it reaches the format in which the StorageProvider stores it.
Below is the process followed from the point a user starts preparing a file to store in Filecoin to the point that the provider produces all the identifiers of Pieces stored in a Sector.
The first three steps take place on the client side.
- When a client wants to store a file in the Filecoin network, they start by producing the IPLD DAG of the file. The hash that represents the root node of the DAG is an IPFS-style CID, called the Payload CID.
- In order to make a Filecoin Piece, the IPLD DAG is serialised into a “Content-Addressable aRchive” (.car) file, which is in raw bytes format. A CAR file is an opaque blob of data that packs together and transfers IPLD nodes. The Payload CID is common between the CAR’ed and un-CAR’ed constructions. This helps later during data retrieval, when data is transferred between the storage client and the storage provider, as we discuss later.
- The resulting .car file is padded with extra zero bits in order for the file to make a binary Merkle tree. To achieve a clean binary Merkle tree, the .car file size has to be a power of two. A padding process, called Fr32 padding, which adds two (2) zero bits to every 254 out of every 256 bits, is applied to the input file. At the next step, the padding process takes the output of the Fr32 padding process and finds the size above it that makes for a power of two. This gap between the result of the Fr32 padding and the next power-of-two size is padded with zeros (a sketch of this size arithmetic follows this list).
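The following Go sketch illustrates the size arithmetic described in the last step: Fr32 padding expands the data by two bits per 254 bits, and the result is then zero-padded up to the next power of two. It mirrors the narrative only; real implementations perform this padding at the bit level while streaming the CAR file.
// Sketch: Fr32 expansion followed by zero-padding to the next power of two.
package main

import "fmt"

// fr32ExpandedSize returns the size in bits after inserting two zero bits for
// every 254 bits of input.
func fr32ExpandedSize(rawBits uint64) uint64 {
	chunks := (rawBits + 253) / 254 // number of 254-bit chunks, rounded up
	return rawBits + 2*chunks
}

// nextPowerOfTwo rounds n up to the nearest power of two.
func nextPowerOfTwo(n uint64) uint64 {
	p := uint64(1)
	for p < n {
		p <<= 1
	}
	return p
}

func main() {
	carBytes := uint64(1_000_000) // example: a ~1 MB CAR file
	expanded := fr32ExpandedSize(carBytes * 8)
	padded := nextPowerOfTwo(expanded)
	fmt.Printf("raw: %d bits, fr32-expanded: %d bits, padded piece: %d bits (%d bytes)\n",
		carBytes*8, expanded, padded, padded/8)
}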
In order to justify the reasoning behind these steps, it is important to understand the overall negotiation process between the StorageClient and a StorageProvider. The Piece CID or CommP is what is included in the deal that the client negotiates and agrees with the storage provider. When the deal is agreed, the client sends the file to the provider (using GraphSync). The provider has to construct the CAR file out of the file received and derive the Piece CID on their side. In order to avoid the client sending a different file to the one agreed, the Piece CID that the provider generates has to be the same as the one included in the deal negotiated earlier.
The following steps take place on the StorageProvider side (apart from step 4, which can also take place on the client side).
- Once the StorageProvider receives the file from the client, they calculate the Merkle root out of the hashes of the Piece (padded .car file). The resulting root of the clean binary Merkle tree is the Piece CID. This is also referred to as CommP or Piece Commitment and, as mentioned earlier, has to be the same as the one included in the deal.
- The Piece is included in a Sector together with data from other deals. The StorageProvider then calculates the Merkle root for all the Pieces inside the Sector. The root of this tree is CommD (aka Commitment of Data or UnsealedSectorCID).
- The StorageProvider then seals the sector, and the root of the resulting Merkle tree is CommRLast.
- Proof of Replication (PoRep), SDR in particular, generates another Merkle root hash called CommC, as an attestation that replication of the data whose commitment is CommD has been performed correctly.
- Finally, CommR (or Commitment of Replication) is the hash of CommC || CommRLast.
IMPORTANT NOTES:
- Fr32 is a 32-bit representation of a field element (which, in our case, is the arithmetic field of BLS12-381). To be well-formed, a value of type Fr32 must actually fit within that field, but this is not enforced by the type system. It is an invariant which must be preserved by correct usage. In the case of so-called Fr32 padding, two zero bits are inserted ‘after’ a number requiring at most 254 bits to represent. This guarantees that the result will be Fr32, regardless of the value of the initial 254 bits. This is a ‘conservative’ technique, since for some initial values, only one bit of zero-padding would actually be required.
- Steps 2 and 3 above are specific to the Lotus implementation. The same outcome can be achieved in different ways, e.g., without using Fr32 bit-padding. However, any implementation has to make sure that the initial IPLD DAG is serialised and padded so that it gives a clean binary tree, and therefore, calculating the Merkle root out of the resulting blob of data gives the same Piece CID. As long as this is the case, implementations can deviate from the first three steps above.
- Finally, it is important to add a note related to the Payload CID (discussed in the first two steps above) and the data retrieval process. The retrieval deal is negotiated on the basis of the Payload CID. When the retrieval deal is agreed, the retrieval miner starts sending the unsealed and “un-CAR’ed” file to the client. The transfer starts from the root node of the IPLD Merkle tree and in this way the client can validate the Payload CID from the beginning of the transfer and verify that the file they are receiving is the file they negotiated in the deal and not random bits.
PieceStore
- State: stable
- Theory Audit: n/a
The PieceStore
module allows for storage and retrieval of Pieces from some local storage. The piecestore’s main goal is to help the
storage and
retrieval market modules to find where sealed data lives inside of sectors. The storage market writes the data, and retrieval market reads it in order to send out to retrieval clients.
The implementation of the PieceStore module can be found here.
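For orientation, the sketch below shows the kind of interface such a module exposes. It is illustrative only: the type and method names here are hypothetical, not the go-fil-markets API linked above.
package piecestore

import "github.com/ipfs/go-cid"

// DealSectorLocation records, for one deal, which sector holds a piece and
// where the piece sits within that sector (assumed fields, for illustration).
type DealSectorLocation struct {
	DealID   uint64
	SectorID uint64
	Offset   uint64
	Length   uint64
}

// PieceStore maps piece CIDs to the deals/sectors that contain them, so the
// retrieval market can locate sealed data written by the storage market.
type PieceStore interface {
	// AddDealForPiece records that a deal stores the piece with the given CID.
	AddDealForPiece(pieceCID cid.Cid, location DealSectorLocation) error
	// GetPieceLocations returns every known sector location for a piece CID.
	GetPieceLocations(pieceCID cid.Cid) ([]DealSectorLocation, error)
}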
Data Transfer in Filecoin
- State: stable
- Theory Audit: n/a
The Data Transfer Protocol is a protocol for transferring all or part of a Piece
across the network when a deal is made. The overall goal for the data transfer module is for it to be an abstraction of the underlying transport medium over which data is transferred between different parties in the Filecoin network. Currently, the underlying medium or protocol used to actually do the data transfer is GraphSync. As such, the Data Transfer Protocol can be thought of as a negotiation protocol.
The Data Transfer Protocol is used both for Storage and for Retrieval Deals. In both cases, the data transfer request is initiated by the client. The primary reason for this is that clients will more often than not be behind NATs and therefore, it is more convenient to start any data transfer from their side. In the case of Storage Deals the data transfer request is initiated as a push request to send data to the storage provider. In the case of Retrieval Deals the data transfer request is initiated as a pull request to retrieve data from the storage provider.
The request to initiate a data transfer includes a voucher or token (not to be confused with the Payment Channel voucher) that points to a specific deal that the two parties have agreed to beforehand. This is so that the storage provider can identify and link the request to a deal it has agreed to and not disregard the request. As described below, the case might be slightly different for retrieval deals, where both a deal proposal and a data transfer request can be sent at once.
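As a rough usage sketch (assuming the Manager interface listed in the Data Structures section below; the import paths and the pre-existing mgr, provider, rootCid and voucher values are assumptions for illustration), a client would initiate a push for a storage deal and a pull for a retrieval deal as follows:
package example

import (
	"context"

	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
	"github.com/ipfs/go-cid"
	selectorparse "github.com/ipld/go-ipld-prime/traversal/selector/parse"
	"github.com/libp2p/go-libp2p/core/peer"
)

// startTransfers shows the two directions of initiation: a push request sends
// data to the provider (storage deal), a pull request asks the provider to
// send data back (retrieval deal). The voucher points at the agreed deal.
func startTransfers(ctx context.Context, mgr datatransfer.Manager,
	provider peer.ID, rootCid cid.Cid, voucher datatransfer.TypedVoucher) error {

	// Select the whole DAG rooted at rootCid.
	allSelector := selectorparse.CommonSelector_ExploreAllRecursively

	// Storage deal: push the data to the provider.
	if _, err := mgr.OpenPushDataChannel(ctx, provider, voucher, rootCid, allSelector); err != nil {
		return err
	}

	// Retrieval deal: pull the data from the provider.
	_, err := mgr.OpenPullDataChannel(ctx, provider, voucher, rootCid, allSelector)
	return err
}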
Modules
- State: stable
- Theory Audit: n/a
This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.
Terminology
- State: stable
- Theory Audit: n/a
- Push Request: A request to send data to the other party - normally initiated by the client and primarily in case of a Storage Deal.
- Pull Request: A request to have the other party send data - normally initiated by the client and primarily in case of a Retrieval Deal.
- Requestor: The party that initiates the data transfer request (whether Push or Pull) - normally the client, at least as currently implemented in Filecoin, to overcome NAT-traversal problems.
- Responder: The party that receives the data transfer request - normally the storage provider.
- Data Transfer Voucher or Token: A wrapper around storage- or retrieval-related data that can identify and validate the transfer request to the other party.
- Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request or disregard the request (a minimal sketch follows this list).
- Transporter: Once a request is negotiated and validated, the actual transfer is managed by a transporter on both sides. The transporter is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
- Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
- GraphSync: The default underlying transport protocol used by the Transporter. The full graphsync specification can be found here
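The following is a minimal sketch of the check a storage-market request validator performs on an incoming voucher. All names here are hypothetical and the voucher layout is an assumption; it is not the go-data-transfer or go-fil-markets API.
package example

import (
	"errors"

	"github.com/ipld/go-ipld-prime/datamodel"
)

// knownDeals is a stand-in for the responder's record of deals it has agreed to.
type knownDeals interface {
	HasDeal(proposalCID string) bool
}

// validatePushVoucher decodes a (hypothetical) storage-deal voucher and accepts
// the transfer only if it references a deal the responder has already agreed to;
// otherwise the request is disregarded.
func validatePushVoucher(deals knownDeals, voucher datamodel.Node) error {
	proposalNode, err := voucher.LookupByString("ProposalCid")
	if err != nil {
		return errors.New("voucher has no ProposalCid field")
	}
	proposal, err := proposalNode.AsString()
	if err != nil {
		return err
	}
	if !deals.HasDeal(proposal) {
		return errors.New("voucher does not reference a known deal; disregarding request")
	}
	return nil
}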
Request Phases
- State: stable
- Theory Audit: n/a
There are two basic phases to any data transfer:
- Negotiation: the requestor and responder agree to the transfer by validating it with the data transfer voucher.
- Transfer: once the negotiation phase is complete, the data is actually transferred. The default protocol used to do the transfer is Graphsync.
Note that the Negotiation and Transfer stages can occur in separate round trips,
or potentially the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data. Whether the process is taking place in a single or multiple round-trips depends in part on whether the request is a push request (storage deal) or a pull request (retrieval deal), and on whether the data transfer negotiation process is able to piggy back on the underlying transport mechanism.
In the case of GraphSync as the transport mechanism, data transfer requests can piggyback as an extension to the GraphSync protocol using
GraphSync’s built-in extensibility. So, only a single round trip is required for Pull Requests. However, because GraphSync is a request/response protocol with no direct support for push-type
requests, in the Push case negotiation happens in a separate request over data transfer’s own libp2p protocol /fil/datatransfer/1.0.0
. Other future transport mechanisms might handle both Push and Pull, either one, or neither as a single round trip.
Upon receiving a data transfer request, the data transfer module decodes the voucher and delivers it to the request validators. In storage deals, the request validator checks if the deal included is one that the recipient has agreed to before. For retrieval deals the request includes the proposal for the retrieval deal itself. As long as the request validator accepts the deal proposal, everything is done at once as a single round trip.
It is worth noting that in the case of retrieval the provider can accept the deal and the data transfer request, but then pause the retrieval itself in order to carry out the unsealing process. The storage provider has to unseal all of the requested data before initiating the actual data transfer. Furthermore, the storage provider has the option of pausing the retrieval flow before starting the unsealing process in order to ask for an unsealing payment. Storage providers have the option to request this payment in order to cover unsealing computation costs and avoid falling victim to misbehaving clients.
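A sketch of the pause-for-unsealing behaviour described above, using the pause/resume methods of the Manager interface listed in the Data Structures section below (the unsealAll helper is a hypothetical stand-in for the provider's unsealing, and possibly payment, logic):
package example

import (
	"context"

	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
)

// unsealAll is a hypothetical stand-in for the provider's unsealing pipeline
// (and, optionally, for requesting an unsealing payment from the client).
func unsealAll(ctx context.Context, chid datatransfer.ChannelID) error { return nil }

// serveRetrieval pauses the accepted pull channel before any data is moved,
// unseals the requested data, and then resumes the transfer.
func serveRetrieval(ctx context.Context, mgr datatransfer.Manager, chid datatransfer.ChannelID) error {
	// Pausing is only allowed if the transport supports it.
	if err := mgr.PauseDataTransferChannel(ctx, chid); err != nil {
		return err
	}
	if err := unsealAll(ctx, chid); err != nil {
		return err
	}
	return mgr.ResumeDataTransferChannel(ctx, chid)
}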
Example Flows
- State: stable
- Theory Audit: n/a
Push Flow
- State: stable
- Theory Audit: n/a
- A requestor initiates a Push transfer when it wants to send data to another party.
- The requestor’s data transfer module will send a push request to the responder along with the data transfer voucher.
- The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder.
- The responder’s data transfer module initiates the transfer by making a GraphSync request.
- The requestor receives the GraphSync request, verifies that it recognises the data transfer and begins sending data.
- The responder receives data and can produce an indication of progress.
- The responder completes receiving data, and notifies any listeners.
The push flow is ideal for storage deals, where the client initiates the data transfer straightaway once the provider indicates their intent to accept and publish the client’s deal proposal.
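Steps 6 and 7 above expose progress to listeners. As a small sketch (illustrative only; it relies only on the ChannelState accessors defined in the Data Structures section below), a monitoring component could report progress like this:
package example

import (
	"context"
	"fmt"

	datatransfer "github.com/filecoin-project/go-data-transfer/v2"
)

// logProgress prints a one-line progress indication for a channel using the
// ChannelState accessors (Status, Sent, Received, Queued).
func logProgress(ctx context.Context, mgr datatransfer.Manager, chid datatransfer.ChannelID) error {
	st, err := mgr.ChannelState(ctx, chid)
	if err != nil {
		return err
	}
	fmt.Printf("channel %s: status=%s sent=%d received=%d queued=%d\n",
		chid, st.Status(), st.Sent(), st.Received(), st.Queued())
	return nil
}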
Pull Flow - Single Round Trip
- State: stable
- Theory Audit: n/a
- A requestor initiates a Pull transfer when it wants to receive data from another party.
- The requestor’s data transfer module initiates the transfer by making a pull request embedded in the GraphSync request to the responder. The request includes the data transfer voucher.
- The responder receives the GraphSync request, and forwards the data transfer request to the data transfer module.
- The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder.
- The responder accepts the GraphSync request and sends the accepted response along with the data transfer level acceptance response.
- The requestor receives data and can produce an indication of progress. This step comes later in time, after the storage provider has finished unsealing the data.
- The requestor completes receiving data, and notifies any listeners.
Protocol
- State: stable
- Theory Audit: n/a
A data transfer can be negotiated over the network via the Data Transfer Protocol, a libp2p protocol type.
Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.
Data Structures
- State: stable
- Theory Audit: n/a
package datatransfer
import (
"fmt"
"time"
"github.com/ipfs/go-cid"
"github.com/ipld/go-ipld-prime"
"github.com/ipld/go-ipld-prime/datamodel"
"github.com/libp2p/go-libp2p/core/peer"
cbg "github.com/whyrusleeping/cbor-gen"
)
//go:generate cbor-gen-for ChannelID ChannelStages ChannelStage Log
// TypeIdentifier is a unique string identifier for a type of encodable object in a
// registry
type TypeIdentifier string
// EmptyTypeIdentifier means there is no voucher present
const EmptyTypeIdentifier = TypeIdentifier("")
// TypedVoucher is a voucher or voucher result in IPLD form and an associated
// type identifier for that voucher or voucher result
type TypedVoucher struct {
Voucher datamodel.Node
Type TypeIdentifier
}
// Equals is a utility to compare that two TypedVouchers are the same - both type
// and the voucher's IPLD content
func (tv1 TypedVoucher) Equals(tv2 TypedVoucher) bool {
return tv1.Type == tv2.Type && ipld.DeepEqual(tv1.Voucher, tv2.Voucher)
}
// TransferID is an identifier for a data transfer, shared between
// request/responder and unique to the requester
type TransferID uint64
// ChannelID is a unique identifier for a channel, distinct by both the other
// party's peer ID + the transfer ID
type ChannelID struct {
Initiator peer.ID
Responder peer.ID
ID TransferID
}
func (c ChannelID) String() string {
return fmt.Sprintf("%s-%s-%d", c.Initiator, c.Responder, c.ID)
}
// OtherParty returns the peer on the other side of the request, depending
// on whether this peer is the initiator or responder
func (c ChannelID) OtherParty(thisPeer peer.ID) peer.ID {
if thisPeer == c.Initiator {
return c.Responder
}
return c.Initiator
}
// Channel represents all the parameters for a single data transfer
type Channel interface {
// TransferID returns the transfer id for this channel
TransferID() TransferID
// BaseCID returns the CID that is at the root of this data transfer
BaseCID() cid.Cid
// Selector returns the IPLD selector for this data transfer (represented as
// an IPLD node)
Selector() datamodel.Node
// Voucher returns the initial voucher for this data transfer
Voucher() TypedVoucher
// Sender returns the peer id for the node that is sending data
Sender() peer.ID
// Recipient returns the peer id for the node that is receiving data
Recipient() peer.ID
// TotalSize returns the total size for the data being transferred
TotalSize() uint64
// IsPull returns whether this is a pull request
IsPull() bool
// ChannelID returns the ChannelID for this request
ChannelID() ChannelID
// OtherPeer returns the counter party peer for this channel
OtherPeer() peer.ID
}
// ChannelState is channel parameters plus it's current state
type ChannelState interface {
Channel
// SelfPeer returns the peer this channel belongs to
SelfPeer() peer.ID
// Status is the current status of this channel
Status() Status
// Sent returns the number of bytes sent
Sent() uint64
// Received returns the number of bytes received
Received() uint64
// Message offers additional information about the current status
Message() string
// Vouchers returns all vouchers sent on this channel
Vouchers() []TypedVoucher
// VoucherResults are results of vouchers sent on the channel
VoucherResults() []TypedVoucher
// LastVoucher returns the last voucher sent on the channel
LastVoucher() TypedVoucher
// LastVoucherResult returns the last voucher result sent on the channel
LastVoucherResult() TypedVoucher
// ReceivedCidsTotal returns the number of (non-unique) cids received so far
// on the channel - note that a block can exist in more than one place in the DAG
ReceivedCidsTotal() int64
// QueuedCidsTotal returns the number of (non-unique) cids queued so far
// on the channel - note that a block can exist in more than one place in the DAG
QueuedCidsTotal() int64
// SentCidsTotal returns the number of (non-unique) cids sent so far
// on the channel - note that a block can exist in more than one place in the DAG
SentCidsTotal() int64
// Queued returns the number of bytes read from the node and queued for sending
Queued() uint64
// DataLimit is the maximum data that can be transferred on this channel before
// revalidation. 0 indicates no limit.
DataLimit() uint64
// RequiresFinalization indicates at the end of the transfer, the channel should
// be left open for a final settlement
RequiresFinalization() bool
// InitiatorPaused indicates whether the initiator of this channel is in a paused state
InitiatorPaused() bool
// ResponderPaused indicates whether the responder of this channel is in a paused state
ResponderPaused() bool
// BothPaused indicates both sides of the transfer have paused the transfer
BothPaused() bool
// SelfPaused indicates whether the local peer for this channel is in a paused state
SelfPaused() bool
// Stages returns the timeline of events this data transfer has gone through,
// for observability purposes.
//
// It is unsafe for the caller to modify the return value, and changes
// may not be persisted. It should be treated as immutable.
Stages() *ChannelStages
}
// ChannelStages captures a timeline of the progress of a data transfer channel,
// grouped by stages.
//
// EXPERIMENTAL; subject to change.
type ChannelStages struct {
// Stages contains an entry for every stage the channel has gone through.
// Each stage then contains logs.
Stages []*ChannelStage
}
// ChannelStage traces the execution of a data transfer channel stage.
//
// EXPERIMENTAL; subject to change.
type ChannelStage struct {
// Human-readable fields.
// TODO: these _will_ need to be converted to canonical representations, so
// they are machine readable.
Name string
Description string
// Timestamps.
// TODO: may be worth adding an exit timestamp. It _could_ be inferred from
// the start of the next stage, or from the timestamp of the last log line
// if this is a terminal stage. But that's non-deterministic and it relies on
// assumptions.
CreatedTime cbg.CborTime
UpdatedTime cbg.CborTime
// Logs contains a detailed timeline of events that occurred inside
// this stage.
Logs []*Log
}
// Log represents a point-in-time event that occurred inside a channel stage.
//
// EXPERIMENTAL; subject to change.
type Log struct {
// Log is a human readable message.
//
// TODO: this _may_ need to be converted to a canonical data model so it
// is machine-readable.
Log string
UpdatedTime cbg.CborTime
}
// AddLog adds a log to the specified stage, creating the stage if
// it doesn't exist yet.
//
// EXPERIMENTAL; subject to change.
func (cs *ChannelStages) AddLog(stage, msg string) {
if cs == nil {
return
}
now := curTime()
st := cs.GetStage(stage)
if st == nil {
st = &ChannelStage{
CreatedTime: now,
}
cs.Stages = append(cs.Stages, st)
}
st.Name = stage
st.UpdatedTime = now
if msg != "" && (len(st.Logs) == 0 || st.Logs[len(st.Logs)-1].Log != msg) {
// only add the log if it's not a duplicate.
st.Logs = append(st.Logs, &Log{msg, now})
}
}
// GetStage returns the ChannelStage object for a named stage, or nil if not found.
//
// TODO: the input should be a strongly-typed enum instead of a free-form string.
// TODO: drop Get from GetStage to make this code more idiomatic. Return a
//
// second ok boolean to make it even more idiomatic.
//
// EXPERIMENTAL; subject to change.
func (cs *ChannelStages) GetStage(stage string) *ChannelStage {
if cs == nil {
return nil
}
for _, s := range cs.Stages {
if s.Name == stage {
return s
}
}
return nil
}
func curTime() cbg.CborTime {
now := time.Now()
return cbg.CborTime(time.Unix(0, now.UnixNano()).UTC())
}
package datatransfer
import "github.com/filecoin-project/go-statemachine/fsm"
// Status is the status of transfer for a given channel
type Status uint64
const (
// Requested means a data transfer was requested but has not yet been approved
Requested Status = iota
// Ongoing means the data transfer is in progress
Ongoing
// TransferFinished indicates the initiator is done sending/receiving
// data but is awaiting confirmation from the responder
TransferFinished
// ResponderCompleted indicates the initiator received a message from the
// responder that it's completed
ResponderCompleted
// Finalizing means the responder is awaiting a final message from the initiator to
// consider the transfer done
Finalizing
// Completing just means we have some final cleanup for a completed request
Completing
// Completed means the data transfer is completed successfully
Completed
// Failing just means we have some final cleanup for a failed request
Failing
// Failed means the data transfer failed
Failed
// Cancelling just means we have some final cleanup for a cancelled request
Cancelling
// Cancelled means the data transfer ended prematurely
Cancelled
// DEPRECATED: Use InitiatorPaused() method on ChannelState
InitiatorPaused
// DEPRECATED: Use ResponderPaused() method on ChannelState
ResponderPaused
// DEPRECATED: Use BothPaused() method on ChannelState
BothPaused
// ResponderFinalizing is a unique state where the responder is awaiting a final voucher
ResponderFinalizing
// ResponderFinalizingTransferFinished is a unique state where the responder is awaiting a final voucher
// and we have received all data
ResponderFinalizingTransferFinished
// ChannelNotFoundError means the searched for data transfer does not exist
ChannelNotFoundError
// Queued indicates a data transfer request has been accepted, but is not actively transferring yet
Queued
// AwaitingAcceptance indicates a transfer request is actively being processed by the transport
// even if the remote has not yet responded that it's accepted the transfer. Such a state can
// occur, for example, in a requestor-initiated transfer that starts processing prior to receiving
// acceptance from the server.
AwaitingAcceptance
)
type statusList []Status
func (sl statusList) Contains(s Status) bool {
for _, ts := range sl {
if ts == s {
return true
}
}
return false
}
func (sl statusList) AsFSMStates() []fsm.StateKey {
sk := make([]fsm.StateKey, 0, len(sl))
for _, s := range sl {
sk = append(sk, s)
}
return sk
}
var NotAcceptedStates = statusList{
Requested,
AwaitingAcceptance,
Cancelled,
Cancelling,
Failed,
Failing,
ChannelNotFoundError}
func (s Status) IsAccepted() bool {
return !NotAcceptedStates.Contains(s)
}
func (s Status) String() string {
return Statuses[s]
}
var FinalizationStatuses = statusList{Finalizing, Completed, Completing}
func (s Status) InFinalization() bool {
return FinalizationStatuses.Contains(s)
}
var TransferCompleteStates = statusList{
TransferFinished,
ResponderFinalizingTransferFinished,
Finalizing,
Completed,
Completing,
Failing,
Failed,
Cancelling,
Cancelled,
ChannelNotFoundError,
}
func (s Status) TransferComplete() bool {
return TransferCompleteStates.Contains(s)
}
var TransferringStates = statusList{
Ongoing,
ResponderCompleted,
ResponderFinalizing,
AwaitingAcceptance,
}
func (s Status) Transferring() bool {
return TransferringStates.Contains(s)
}
// Statuses are human readable names for data transfer states
var Statuses = map[Status]string{
// Requested means a data transfer was requested but has not yet been approved
Requested: "Requested",
Ongoing: "Ongoing",
TransferFinished: "TransferFinished",
ResponderCompleted: "ResponderCompleted",
Finalizing: "Finalizing",
Completing: "Completing",
Completed: "Completed",
Failing: "Failing",
Failed: "Failed",
Cancelling: "Cancelling",
Cancelled: "Cancelled",
InitiatorPaused: "InitiatorPaused",
ResponderPaused: "ResponderPaused",
BothPaused: "BothPaused",
ResponderFinalizing: "ResponderFinalizing",
ResponderFinalizingTransferFinished: "ResponderFinalizingTransferFinished",
ChannelNotFoundError: "ChannelNotFoundError",
Queued: "Queued",
AwaitingAcceptance: "AwaitingAcceptance",
}
type Manager interface {
// Start initializes data transfer processing
Start(ctx context.Context) error
// OnReady registers a listener for when the data transfer comes on line
OnReady(ReadyFunc)
// Stop terminates all data transfers and ends processing
Stop(ctx context.Context) error
// RegisterVoucherType registers a validator for the given voucher type
// will error if voucher type does not implement voucher
// or if there is a voucher type registered with an identical identifier
RegisterVoucherType(voucherType TypeIdentifier, validator RequestValidator) error
// RegisterTransportConfigurer registers the given transport configurer to be run on requests with the given voucher
// type
RegisterTransportConfigurer(voucherType TypeIdentifier, configurer TransportConfigurer) error
// open a data transfer that will send data to the recipient peer and
// transfer parts of the piece that match the selector
OpenPushDataChannel(ctx context.Context, to peer.ID, voucher TypedVoucher, baseCid cid.Cid, selector datamodel.Node, options ...TransferOption) (ChannelID, error)
// open a data transfer that will request data from the sending peer and
// transfer parts of the piece that match the selector
OpenPullDataChannel(ctx context.Context, to peer.ID, voucher TypedVoucher, baseCid cid.Cid, selector datamodel.Node, options ...TransferOption) (ChannelID, error)
// send an intermediate voucher as needed when the receiver sends a request for revalidation
SendVoucher(ctx context.Context, chid ChannelID, voucher TypedVoucher) error
// send information from the responder to update the initiator on the state of their voucher
SendVoucherResult(ctx context.Context, chid ChannelID, voucherResult TypedVoucher) error
// Update the validation status for a given channel, to change data limits, finalization, accepted status, and pause state
// and send new voucher results as
UpdateValidationStatus(ctx context.Context, chid ChannelID, validationResult ValidationResult) error
// close an open channel (effectively a cancel)
CloseDataTransferChannel(ctx context.Context, chid ChannelID) error
// pause a data transfer channel (only allowed if transport supports it)
PauseDataTransferChannel(ctx context.Context, chid ChannelID) error
// resume a data transfer channel (only allowed if transport supports it)
ResumeDataTransferChannel(ctx context.Context, chid ChannelID) error
// get status of a transfer
TransferChannelStatus(ctx context.Context, x ChannelID) Status
// get channel state
ChannelState(ctx context.Context, chid ChannelID) (ChannelState, error)
// get notified when certain types of events happen
SubscribeToEvents(subscriber Subscriber) Unsubscribe
// get all in progress transfers
InProgressChannels(ctx context.Context) (map[ChannelID]ChannelState, error)
// RestartDataTransferChannel restarts an existing data transfer channel
RestartDataTransferChannel(ctx context.Context, chid ChannelID) error
}
Data Formats and Serialization
- State: reliable
- Theory Audit: n/a
Filecoin seeks to make use of as few data formats as needed, with well-specified serialization rules, to improve protocol security through simplicity and enable interoperability amongst implementations of the Filecoin protocol.
Read more on design considerations here for CBOR-usage and here for int types in Filecoin.
Data Formats
- State: reliable
- Theory Audit: n/a
Filecoin in-memory data types are mostly straightforward. Implementations should support two integer types: Int (a native 64-bit integer) and BigInt (arbitrary precision), and should avoid floating-point numbers to minimize interoperability issues across programming languages and implementations.
You can also read more on data formats as part of randomness generation in the Filecoin protocol.
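As a small illustration of why arbitrary-precision integers are needed (token amounts are denominated in attoFIL, so even modest FIL values exceed what fits in 64-bit arithmetic), the sketch below uses Go's math/big; implementations typically wrap an equivalent BigInt type:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	attoPerFIL := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil) // 1 FIL = 1e18 attoFIL

	// Maximum token supply mentioned in this spec: 2e9 FIL, i.e. 2e27 attoFIL,
	// which does not fit in a 64-bit integer.
	maxSupply := new(big.Int).Mul(big.NewInt(2_000_000_000), attoPerFIL)

	fmt.Println("max supply (attoFIL):", maxSupply)
}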
Serialization
- State: reliable
- Theory Audit: n/a
Data Serialization
in Filecoin ensures a consistent format for serializing in-memory data for transfer
in-flight and in-storage. Serialization is critical to protocol security and interoperability across
implementations of the Filecoin protocol, enabling consistent state updates across Filecoin nodes.
All data structures in Filecoin are CBOR-tuple encoded. That is, any data structures used in the Filecoin system (structs in this spec) should be serialized as CBOR-arrays with items corresponding to the data structure fields in their order of declaration.
You can find the encoding structure for major data types in CBOR here.
For illustration, an in-memory map would be represented as a CBOR-array of the keys and values listed in some pre-determined order. A near-term update to the serialization format will involve tagging fields appropriately to ensure appropriate serialization/deserialization as the protocol evolves.
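To illustrate what CBOR-tuple encoding means in practice, the sketch below hand-encodes a two-field struct as a two-element CBOR array in field-declaration order. It is illustrative only; real implementations generate this code (e.g. with cbor-gen) rather than writing it by hand, and the encoder here handles just enough of CBOR for the example.
package main

import (
	"bytes"
	"fmt"
)

// Example is serialized as the CBOR array [Epoch, Note], i.e. the fields in
// their order of declaration, with no field names on the wire.
type Example struct {
	Epoch uint64
	Note  string
}

func marshalCBORTuple(e Example) []byte {
	var buf bytes.Buffer
	buf.WriteByte(0x82) // major type 4 (array), length 2
	writeUint(&buf, e.Epoch)
	buf.WriteByte(0x60 | byte(len(e.Note))) // major type 3 (text string), short length (< 24)
	buf.WriteString(e.Note)
	return buf.Bytes()
}

// writeUint encodes an unsigned integer (major type 0); only the short forms
// needed for this example are handled.
func writeUint(buf *bytes.Buffer, v uint64) {
	switch {
	case v < 24:
		buf.WriteByte(byte(v))
	case v < 256:
		buf.WriteByte(0x18)
		buf.WriteByte(byte(v))
	default:
		buf.WriteByte(0x19)
		buf.WriteByte(byte(v >> 8))
		buf.WriteByte(byte(v))
	}
}

func main() {
	fmt.Printf("% x\n", marshalCBORTuple(Example{Epoch: 42, Note: "hi"}))
	// prints: 82 18 2a 62 68 69  ->  the array [42, "hi"]
}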
Virtual Machine
- State: reliable
- Theory Audit: n/a
An Actor in the Filecoin Blockchain is the equivalent of a smart contract in the Ethereum Virtual Machine.
The Filecoin Virtual Machine (VM) is the system component that is in charge of executing all actor code. Execution of actors on the Filecoin VM (i.e., on-chain execution) incurs a gas cost.
Any operation applied (i.e., executed) on the Filecoin VM produces an output in the form of a State Tree (discussed below). The latest State Tree is the current source of truth in the Filecoin Blockchain. The State Tree is identified by a CID, which is stored in the IPLD store.
VM Actor Interface
- State: reliable
- Theory Audit: wip
As mentioned above, Actors are the Filecoin equivalent of smart contracts in the Ethereum Virtual Machine. As such, Actors are very central components of the system. Any change to the current state of the Filecoin blockchain has to be triggered through an actor method invocation.
This sub-section describes the interface between Actors and the Filecoin Virtual Machine. This means that most of what is described below does not strictly belong to the VM; rather, it is logic that sits at the interface between the VM and the Actors.
There are eleven (11) types of builtin Actors in total, not all of which interact with the VM. Some Actors do not invoke changes to the StateTree of the blockchain and therefore, do not need to have an interface to the VM. We discuss the details of all System Actors later on in the System Actors subsection.
The actor address is a stable address generated by hashing the sender’s public key and a creation nonce. It should be stable across chain re-organizations. The actor ID address on the other hand, is an auto-incrementing address that is compact but can change in case of chain re-organizations. That being said, after being created, actors should use an actor address.
package builtin
import (
addr "github.com/filecoin-project/go-address"
)
// Addresses for singleton system actors.
var (
// Distinguished AccountActor that is the source of system implicit messages.
SystemActorAddr = mustMakeAddress(0)
InitActorAddr = mustMakeAddress(1)
RewardActorAddr = mustMakeAddress(2)
CronActorAddr = mustMakeAddress(3)
StoragePowerActorAddr = mustMakeAddress(4)
StorageMarketActorAddr = mustMakeAddress(5)
VerifiedRegistryActorAddr = mustMakeAddress(6)
// Distinguished AccountActor that is the destination of all burnt funds.
BurntFundsActorAddr = mustMakeAddress(99)
)
const FirstNonSingletonActorId = 100
func mustMakeAddress(id uint64) addr.Address {
address, err := addr.NewIDAddress(id)
if err != nil {
panic(err)
}
return address
}
The ActorState
structure is composed of the actor’s balance, in terms of tokens held by this actor, as well as a group of state methods used to query, inspect and interact with chain state.
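For illustration, the per-actor record kept in the state tree commonly looks like the sketch below (field names follow common implementations such as Lotus and are shown here as an assumption, not as normative spec):
package example

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/ipfs/go-cid"
)

// Actor is the per-actor record referenced from the state tree.
type Actor struct {
	Code    cid.Cid         // CID of the actor's code
	Head    cid.Cid         // CID of the root of the actor's state
	Nonce   uint64          // call sequence number of messages sent from this actor
	Balance abi.TokenAmount // tokens held by this actor
}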
State Tree
- State: reliable
- Theory Audit: wip
The State Tree is the output of the execution of any operation applied on the Filecoin Blockchain. The on-chain (i.e., VM) state data structure is a map (in the form of a Hash Array Mapped Trie - HAMT) that binds addresses to actor states. The current State Tree function is called by the VM upon every actor method invocation.
type StateTree struct {
root adt.Map
version types.StateTreeVersion
info cid.Cid
Store cbor.IpldStore
lookupIDFun func(address.Address) (address.Address, error)
snaps *stateSnaps
}
VM Message - Actor Method Invocation
- State: reliable
- Theory Audit: wip
A message is the unit of communication between two actors, and thus the primitive cause of changes in state. A message combines:
- a token amount to be transferred from the sender to the receiver, and
- a method with parameters to be invoked on the receiver (optional/where applicable).
Actor code may send additional messages to other actors while processing a received message. Messages are processed synchronously, that is, an actor waits for a sent message to complete before resuming control.
The processing of a message consumes units of computation and storage, both of which are denominated in gas. A message’s gas limit provides an upper bound on the computation required to process it. The sender of a message pays for the gas units consumed by a message’s execution (including all nested messages) at a gas price they determine. A block producer chooses which messages to include in a block and is rewarded according to each message’s gas price and consumption, forming a market.
Message syntax validation
- State: reliable
- Theory Audit: wip
A syntactically invalid message must not be transmitted, retained in a message pool, or included in a block. If an invalid message is received, it should be dropped and not propagated further.
When transmitted individually (before inclusion in a block), a message is packaged as
SignedMessage
, regardless of signature scheme used. A valid signed message has a total serialized size no greater than message.MessageMaxSize
.
type SignedMessage struct {
Message Message
Signature crypto.Signature
}
A syntactically valid UnsignedMessage:
- has a well-formed, non-empty To address,
- has a well-formed, non-empty From address,
- has Value no less than zero and no greater than the total token supply (2e9 * 1e18),
- has non-negative GasPrice,
- has GasLimit that is at least equal to the gas consumption associated with the message’s serialized bytes, and
- has GasLimit that is no greater than the block gas limit network parameter.
type Message struct {
// Version of this message (has to be non-negative)
Version uint64
// Address of the receiving actor.
To address.Address
// Address of the sending actor.
From address.Address
CallSeqNum uint64
// Value to transfer from sender's to receiver's balance.
Value BigInt
// GasPrice is a Gas-to-FIL cost
GasPrice BigInt
// Maximum Gas to be spent on the processing of this message
GasLimit int64
// Optional method to invoke on receiver, zero for a plain value transfer.
Method abi.MethodNum
//Serialized parameters to the method.
Params []byte
}
There should be several functions able to extract information from the Message struct
, such as the sender and recipient addresses, the value to be transferred, the required funds to execute the message and the CID of the message.
Given that Messages should eventually be included in a Block and added to the blockchain, the validity of the message should be checked with regard to the sender and the receiver of the message, the value (which should be non-negative and always smaller than the circulating supply), the gas price (which again should be non-negative), and the message’s GasLimit, which should not be greater than the block’s gas limit (BlockGasLimit).
Message semantic validation
- State: reliable
- Theory Audit: wip
Semantic validation refers to validation requiring information outside of the message itself.
A semantically valid SignedMessage
must carry a signature that verifies the payload as having
been signed with the public key of the account actor identified by the From
address.
Note that when the From
address is an ID-address, the public key must be
looked up in the state of the sending account actor in the parent state identified by the block.
Note: the sending actor must exist in the parent state identified by the block that includes the message. This means that it is not valid for a single block to include a message that creates a new account actor and a message from that same actor. The first message from that actor must wait until a subsequent epoch. Message pools may exclude messages from an actor that is not yet present in the chain state.
There is no further semantic validation of a message that can cause a block including the message
to be invalid. Every syntactically valid and correctly signed message can be included in a block and
will produce a receipt from execution. The MessageReceipt struct
includes the following:
type MessageReceipt struct {
ExitCode exitcode.ExitCode
Return []byte
GasUsed int64
}
However, a message may fail to execute to completion, in which case it will not trigger the desired state change.
The reason for this “no message semantic validation” policy is that the state that a message will be applied to cannot be known before the message is executed as part of a tipset. A block producer does not know whether another block will precede it in the tipset, thus altering the state to which the block’s messages will apply from the declared parent state.
package types
import (
"bytes"
"encoding/json"
"fmt"
block "github.com/ipfs/go-block-format"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/build/buildconstants"
)
const MessageVersion = 0
type ChainMsg interface {
Cid() cid.Cid
VMMessage() *Message
ToStorageBlock() (block.Block, error)
// FIXME: This is the *message* length, this name is misleading.
ChainLength() int
}
type Message struct {
Version uint64
To address.Address
From address.Address
Nonce uint64
Value abi.TokenAmount
GasLimit int64
GasFeeCap abi.TokenAmount
GasPremium abi.TokenAmount
Method abi.MethodNum
Params []byte
}
func (m *Message) Caller() address.Address {
return m.From
}
func (m *Message) Receiver() address.Address {
return m.To
}
func (m *Message) ValueReceived() abi.TokenAmount {
return m.Value
}
func DecodeMessage(b []byte) (*Message, error) {
var msg Message
if err := msg.UnmarshalCBOR(bytes.NewReader(b)); err != nil {
return nil, err
}
if msg.Version != MessageVersion {
return nil, fmt.Errorf("decoded message had incorrect version (%d)", msg.Version)
}
return &msg, nil
}
func (m *Message) Serialize() ([]byte, error) {
buf := new(bytes.Buffer)
if err := m.MarshalCBOR(buf); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (m *Message) ChainLength() int {
ser, err := m.Serialize()
if err != nil {
panic(err)
}
return len(ser)
}
func (m *Message) ToStorageBlock() (block.Block, error) {
data, err := m.Serialize()
if err != nil {
return nil, err
}
c, err := abi.CidBuilder.Sum(data)
if err != nil {
return nil, err
}
return block.NewBlockWithCid(data, c)
}
func (m *Message) Cid() cid.Cid {
b, err := m.ToStorageBlock()
if err != nil {
panic(fmt.Sprintf("failed to marshal message: %s", err)) // I think this is maybe sketchy, what happens if we try to serialize a message with an undefined address in it?
}
return b.Cid()
}
type mCid struct {
*RawMessage
CID cid.Cid
}
type RawMessage Message
func (m *Message) MarshalJSON() ([]byte, error) {
return json.Marshal(&mCid{
RawMessage: (*RawMessage)(m),
CID: m.Cid(),
})
}
func (m *Message) RequiredFunds() BigInt {
return BigMul(m.GasFeeCap, NewInt(uint64(m.GasLimit)))
}
func (m *Message) VMMessage() *Message {
return m
}
func (m *Message) Equals(o *Message) bool {
return m.Cid() == o.Cid()
}
func (m *Message) EqualCall(o *Message) bool {
m1 := *m
m2 := *o
m1.GasLimit, m2.GasLimit = 0, 0
m1.GasFeeCap, m2.GasFeeCap = big.Zero(), big.Zero()
m1.GasPremium, m2.GasPremium = big.Zero(), big.Zero()
return (&m1).Equals(&m2)
}
func (m *Message) ValidForBlockInclusion(minGas int64, version network.Version) error {
if m.Version != 0 {
return xerrors.New("'Version' unsupported")
}
if m.To == address.Undef {
return xerrors.New("'To' address cannot be empty")
}
if m.To == buildconstants.ZeroAddress && version >= network.Version7 {
return xerrors.New("invalid 'To' address")
}
if !abi.AddressValidForNetworkVersion(m.To, version) {
return xerrors.New("'To' address protocol unsupported for network version")
}
if m.From == address.Undef {
return xerrors.New("'From' address cannot be empty")
}
if !abi.AddressValidForNetworkVersion(m.From, version) {
return xerrors.New("'From' address protocol unsupported for network version")
}
if m.Value.Int == nil {
return xerrors.New("'Value' cannot be nil")
}
if m.Value.LessThan(big.Zero()) {
return xerrors.New("'Value' field cannot be negative")
}
if m.Value.GreaterThan(TotalFilecoinInt) {
return xerrors.New("'Value' field cannot be greater than total filecoin supply")
}
if m.GasFeeCap.Int == nil {
return xerrors.New("'GasFeeCap' cannot be nil")
}
if m.GasFeeCap.LessThan(big.Zero()) {
return xerrors.New("'GasFeeCap' field cannot be negative")
}
if m.GasPremium.Int == nil {
return xerrors.New("'GasPremium' cannot be nil")
}
if m.GasPremium.LessThan(big.Zero()) {
return xerrors.New("'GasPremium' field cannot be negative")
}
if m.GasPremium.GreaterThan(m.GasFeeCap) {
return xerrors.New("'GasFeeCap' less than 'GasPremium'")
}
if m.GasLimit > buildconstants.BlockGasLimit {
return xerrors.Errorf("'GasLimit' field cannot be greater than a block's gas limit (%d > %d)", m.GasLimit, buildconstants.BlockGasLimit)
}
if m.GasLimit <= 0 {
return xerrors.Errorf("'GasLimit' field %d must be positive", m.GasLimit)
}
// since prices might vary with time, this is technically semantic validation
if m.GasLimit < minGas {
return xerrors.Errorf("'GasLimit' field cannot be less than the cost of storing a message on chain %d < %d", m.GasLimit, minGas)
}
return nil
}
// EffectiveGasPremium returns the effective gas premium claimable by the miner
// given the supplied base fee. This method is not used anywhere except the Eth API.
//
// Filecoin clamps the gas premium at GasFeeCap - BaseFee, if lower than the
// specified premium. Returns 0 if GasFeeCap is less than BaseFee.
func (m *Message) EffectiveGasPremium(baseFee abi.TokenAmount) abi.TokenAmount {
available := big.Sub(m.GasFeeCap, baseFee)
// It's possible that storage providers may include messages with gasFeeCap less than the baseFee
// In such cases, their reward should be viewed as zero
if available.LessThan(big.NewInt(0)) {
available = big.NewInt(0)
}
if big.Cmp(m.GasPremium, available) <= 0 {
return m.GasPremium
}
return available
}
const TestGasLimit = 100e6
VM Runtime Environment (Inside the VM)
- State: reliable
- Theory Audit: n/a
Receipts
- State: reliable
- Theory Audit: n/a
A MessageReceipt
contains the result of a top-level message execution. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution.
A syntactically valid receipt has:
- a non-negative ExitCode,
- a non-empty Return value only if the exit code is zero, and
- a non-negative GasUsed.
type MessageReceipt struct {
ExitCode exitcode.ExitCode
Return []byte
GasUsed int64
}
vm/runtime Actors Interface
- State: reliable
- Theory Audit: n/a
The Actors Interface implementation can be found here
vm/runtime VM Implementation
- State: reliable
- Theory Audit: n/a
The Lotus implementation of the Filecoin Virtual Machine runtime can be found here
Exit Codes
- State: reliable
- Theory Audit: n/a
There are some common runtime exit codes that are shared by different actors. Their definition can be found here.
Gas Fees
- State: reliable
- Theory Audit: coming
Summary
- State: reliable
- Theory Audit: coming
As is traditionally the case with many blockchains, Gas is a unit of measure of how much storage and/or compute resource an on-chain message operation consumes in order to be executed. At a high level, it works as follows: the message sender specifies the maximum amount they are willing to pay in order for their message to be executed and included in a block. This is specified both in terms of total number of units of gas (GasLimit
), which is generally expected to be higher than the actual GasUsed
and in terms of the price (or fee) per unit of gas (GasFeeCap
).
Traditionally, GasUsed * GasFeeCap
goes to the block producing miner as a reward. The result of this product is treated as the priority fee for message inclusion, that is, messages are ordered in decreasing sequence and those with the highest GasUsed * GasFeeCap
are prioritised, given that they return more profit to the miner.
However, it has been observed that this tactic (of paying GasUsed * GasFeeCap
) is problematic for block producing miners for a few reasons. Firstly, a block producing miner may include a very expensive message (in terms of chain resources required) for free, in which case the chain itself needs to bear the cost. Secondly, message senders can set arbitrarily high prices for low-cost messages (again, in terms of chain resources), leading to a DoS vulnerability.
In order to overcome this situation, the Filecoin blockchain defines a BaseFee
, which is burnt for every message. The rationale is that, since Gas is a measure of on-chain resource consumption, it makes sense for it to be burned rather than rewarded to miners. This way, fee manipulation by miners is avoided. The BaseFee
is dynamic and adjusts automatically according to network congestion, which makes the network resilient against spam attacks: because network load increases during a spam attack, maintaining full blocks of spam messages for an extended period becomes infeasible for an attacker due to the increasing BaseFee
.
Finally, GasPremium
is the priority fee included by senders to incentivize miners to pick the most profitable messages. In other words, if a message sender wants its message to be included more quickly, they can set a higher GasPremium
.
Parameters
- State: reliable
- Theory Audit: coming
- GasUsed is a measure of the amount of resources (or units of gas) consumed in order to execute a message. Each unit of gas is measured in attoFIL and therefore, GasUsed is a number that represents the units of energy consumed. GasUsed is independent of whether a message was executed correctly or failed.
- BaseFee is the set price per unit of gas (measured in attoFIL/gas unit) to be burned (sent to an unrecoverable address) for every message execution. The value of the BaseFee is dynamic and adjusts according to current network congestion parameters. For example, when the network exceeds 5B gas limit usage, the BaseFee increases and the opposite happens when gas limit usage falls below 5B. The BaseFee applied to each block should be included in the block itself. It should be possible to get the value of the current BaseFee from the head of the chain. The BaseFee applies per unit of GasUsed and therefore, the total amount of gas burned for a message is BaseFee * GasUsed. Note that the BaseFee is incurred for every message, but its value is the same for all messages in the same block.
- GasLimit is measured in units of gas and set by the message sender. It imposes a hard limit on the amount of gas (i.e., number of units of gas) that a message’s execution should be allowed to consume on chain. A message consumes gas for every fundamental operation it triggers, and a message that runs out of gas fails. When a message fails, every modification to the state that happened as a result of this message’s execution is reverted back to its previous state. Independently of whether a message execution was successful or not, the miner will receive a reward for the resources they consumed to execute the message (see GasPremium below).
- GasFeeCap is the maximum price that the message sender is willing to pay per unit of gas (measured in attoFIL/gas unit). Together with the GasLimit, the GasFeeCap sets the maximum amount of FIL that a sender will pay for a message: a sender is guaranteed that a message will never cost them more than GasLimit * GasFeeCap attoFIL (not including any Premium that the message includes for its recipient).
- GasPremium is the price per unit of gas (measured in attoFIL/gas) that the message sender is willing to pay (on top of the BaseFee) to “tip” the miner that will include this message in a block. A message typically earns its miner GasLimit * GasPremium attoFIL, where effectively GasPremium = GasFeeCap - BaseFee. Note that GasPremium is applied on GasLimit, as opposed to GasUsed, in order to make message selection for miners more straightforward.
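To make the fee arithmetic above concrete, here is a small worked sketch (made-up numbers; it ignores the gas overestimation burn computed by ComputeGasOverestimationBurn below):
package main

import "fmt"

func main() {
	var (
		gasLimit   int64 = 10_000_000 // set by the sender
		gasUsed    int64 = 8_000_000  // actual consumption
		baseFee    int64 = 100        // attoFIL per gas unit, burned
		gasFeeCap  int64 = 400        // max attoFIL per gas unit the sender will pay
		gasPremium int64 = 200        // attoFIL per gas unit offered to the miner
	)

	burn := baseFee * gasUsed         // burned:       BaseFee * GasUsed     = 0.8e9 attoFIL
	minerTip := gasPremium * gasLimit // miner reward: GasPremium * GasLimit = 2.0e9 attoFIL
	maxCost := gasFeeCap * gasLimit   // hard cap:     GasFeeCap * GasLimit  = 4.0e9 attoFIL

	fmt.Println("burned:    ", burn)
	fmt.Println("miner tip: ", minerTip)
	fmt.Println("sender cap:", maxCost)
	// The sender never pays more than maxCost; here burn+minerTip <= maxCost,
	// consistent with GasFeeCap >= BaseFee + GasPremium.
}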
func ComputeGasOverestimationBurn(gasUsed, gasLimit int64) (int64, int64) {
if gasUsed == 0 {
return 0, gasLimit
}
// over = gasLimit/gasUsed - 1 - 0.1
// over = min(over, 1)
// gasToBurn = (gasLimit - gasUsed) * over
// so to factor out division from `over`
// over*gasUsed = min(gasLimit - (11*gasUsed)/10, gasUsed)
// gasToBurn = ((gasLimit - gasUsed)*over*gasUsed) / gasUsed
over := gasLimit - (gasOveruseNum*gasUsed)/gasOveruseDenom
if over < 0 {
return gasLimit - gasUsed, 0
}
// if we want sharper scaling it goes here:
// over *= 2
if over > gasUsed {
over = gasUsed
}
// needs bigint, as it overflows in pathological case gasLimit > 2^32 gasUsed = gasLimit / 2
gasToBurn := big.NewInt(gasLimit - gasUsed)
gasToBurn = big.Mul(gasToBurn, big.NewInt(over))
gasToBurn = big.Div(gasToBurn, big.NewInt(gasUsed))
return gasLimit - gasUsed - gasToBurn.Int64(), gasToBurn.Int64()
}
func ComputeNextBaseFee(baseFee types.BigInt, gasLimitUsed int64, noOfBlocks int, epoch abi.ChainEpoch) types.BigInt {
// delta := gasLimitUsed/noOfBlocks - buildconstants.BlockGasTarget
// change := baseFee * deta / BlockGasTarget
// nextBaseFee = baseFee + change
// nextBaseFee = max(nextBaseFee, buildconstants.MinimumBaseFee)
var delta int64
if epoch > buildconstants.UpgradeSmokeHeight {
delta = gasLimitUsed / int64(noOfBlocks)
delta -= buildconstants.BlockGasTarget
} else {
delta = buildconstants.PackingEfficiencyDenom * gasLimitUsed / (int64(noOfBlocks) * buildconstants.PackingEfficiencyNum)
delta -= buildconstants.BlockGasTarget
}
// cap change at 12.5% (BaseFeeMaxChangeDenom) by capping delta
if delta > buildconstants.BlockGasTarget {
delta = buildconstants.BlockGasTarget
}
if delta < -buildconstants.BlockGasTarget {
delta = -buildconstants.BlockGasTarget
}
change := big.Mul(baseFee, big.NewInt(delta))
change = big.Div(change, big.NewInt(buildconstants.BlockGasTarget))
change = big.Div(change, big.NewInt(buildconstants.BaseFeeMaxChangeDenom))
nextBaseFee := big.Add(baseFee, change)
if big.Cmp(nextBaseFee, big.NewInt(buildconstants.MinimumBaseFee)) < 0 {
nextBaseFee = big.NewInt(buildconstants.MinimumBaseFee)
}
return nextBaseFee
}
Notes & Implications
- State: reliable
- Theory Audit: coming
- The GasFeeCap should always be higher than the network’s BaseFee. If a message’s GasFeeCap is lower than the BaseFee, then the remainder comes from the miner (as a penalty). This penalty is applied to the miner because they have selected a message that pays less than the network BaseFee (i.e., does not cover the network costs). However, a miner might want to choose a message whose GasFeeCap is smaller than the BaseFee if the same sender has another message in the message pool whose GasFeeCap is much bigger than the BaseFee. Recall that a miner should pick all the messages of a sender from the message pool, if more than one exists. The justification is that the increased fee of the second message will pay off the loss from the first.
- If BaseFee + GasPremium > GasFeeCap, then the miner might not earn the entire GasLimit * GasPremium as their reward.
- A message is hard-constrained to spending no more than GasFeeCap * GasLimit. From this amount, the network BaseFee is paid (burnt) first. After that, up to GasLimit * GasPremium will be given to the miner as a reward.
- A message that runs out of gas fails with an “out of gas” exit code. GasUsed * BaseFee will still be burned (in this case GasUsed = GasLimit), and the miner will still be rewarded GasLimit * GasPremium. This assumes that GasFeeCap > BaseFee + GasPremium.
- A low value for the GasFeeCap will likely cause the message to be stuck in the message pool, as it will not be attractive enough in terms of profit for any miner to pick it and include it in a block. When this happens, there is a procedure to update the GasFeeCap so that the message becomes more attractive to miners. The sender can push a new message into the message pool (which, by default, will propagate to other miners’ message pools) where: i) the identifier of the old and new messages is the same (e.g., same Nonce) and ii) the GasPremium is updated and increased by at least 25% of the previous value.
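The last point can be made concrete with a short sketch (illustrative only; it uses the go-state-types big package, and the exact replacement rule enforced by a given message pool implementation may differ in detail):
package main

import (
	"fmt"

	"github.com/filecoin-project/go-state-types/big"
)

// minReplacementPremium returns the smallest GasPremium a replacement message
// (same sender, same Nonce) must carry, given the "at least 25% higher" rule.
func minReplacementPremium(oldPremium big.Int) big.Int {
	return big.Div(big.Mul(oldPremium, big.NewInt(125)), big.NewInt(100))
}

func main() {
	fmt.Println(minReplacementPremium(big.NewInt(100_000))) // 125000
}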
System Actors
- State: reliable
- Theory Audit: done
There are eleven (11) builtin System Actors in total, but not all of them interact with the VM. Each actor is identified by a Code ID (or CID).
There are four (4) system actors that are required for VM processing or that otherwise interact with the VM:
- the InitActor, which initializes new actors and records the network name,
- the CronActor, a scheduler actor that runs critical functions at every epoch,
- the AccountActor, responsible for user accounts (non-singleton), and
- the RewardActor, for block reward and token vesting (singleton).
The remaining seven (7) builtin System Actors that do not interact directly with the VM are the following:
- StorageMarketActor: responsible for managing storage and retrieval deals [ Market Actor Repo]
- StorageMinerActor: responsible for storage mining operations and collecting proofs [ Storage Miner Actor Repo]
- MultisigActor (or Multi-Signature Wallet Actor): responsible for operations involving the Filecoin wallet [ Multisig Actor Repo]
- PaymentChannelActor: responsible for setting up and settling funds related to payment channels [ Paych Actor Repo]
- StoragePowerActor: responsible for keeping track of the storage power allocated to each storage miner [ Storage Power Actor]
- VerifiedRegistryActor: responsible for managing verified clients [ Verifreg Actor Repo]
- SystemActor: general system actor [ System Actor Repo]
CronActor
- State: reliable
- Theory Audit: done
Built in to the genesis state, the CronActor
’s dispatch table invokes the StoragePowerActor
and StorageMarketActor
for them to maintain internal state and process deferred events. It could in principle invoke other actors after a network upgrade.
package cron
import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/cbor"
rtt "github.com/filecoin-project/go-state-types/rt"
cron0 "github.com/filecoin-project/specs-actors/actors/builtin/cron"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/specs-actors/v8/actors/builtin"
"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)
// The cron actor is a built-in singleton that sends messages to other registered actors at the end of each epoch.
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.EpochTick,
}
}
func (a Actor) Code() cid.Cid {
return builtin.CronActorCodeID
}
func (a Actor) IsSingleton() bool {
return true
}
func (a Actor) State() cbor.Er {
return new(State)
}
var _ runtime.VMActor = Actor{}
//type ConstructorParams struct {
// Entries []Entry
//}
type ConstructorParams = cron0.ConstructorParams
type EntryParam = cron0.Entry
func (a Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
entries := make([]Entry, len(params.Entries))
for i, e := range params.Entries {
entries[i] = Entry(e) // Identical
}
rt.StateCreate(ConstructState(entries))
return nil
}
// Invoked by the system after all other messages in the epoch have been processed.
func (a Actor) EpochTick(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
var st State
rt.StateReadonly(&st)
for _, entry := range st.Entries {
code := rt.Send(entry.Receiver, entry.MethodNum, nil, abi.NewTokenAmount(0), &builtin.Discard{})
// Any error and return value are ignored.
if code.IsError() {
rt.Log(rtt.ERROR, "cron failed to send entry to %s, send error code %d", entry.Receiver, code)
}
}
return nil
}
InitActor
- State: reliable
- Theory Audit: done
The InitActor has the power to create new actors, e.g., those that enter the system. It maintains a table resolving public-key and temporary actor addresses to their canonical ID-addresses. Invalid CIDs should not get committed to the state tree.
Note that the canonical ID-address does not persist across chain re-organizations, whereas the actor (robust) address or public-key address does survive re-organization.
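To illustrate how the Exec method below is used, the following hedged sketch builds the parameters for creating a payment channel actor via the InitActor. It assumes the specs-actors v8 payment channel constructor parameters and the cbor-gen MarshalCBOR helpers; in practice the surrounding message assembly is handled by the client implementation.

package example

import (
	"bytes"

	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
	init_ "github.com/filecoin-project/specs-actors/v8/actors/builtin/init"
	paych "github.com/filecoin-project/specs-actors/v8/actors/builtin/paych"
)

// buildPaychExecParams returns the ExecParams for an Exec message sent to the
// InitActor that instantiates a payment channel between two addresses.
// Sketch only: the enclosing message and error handling are omitted.
func buildPaychExecParams(from, to addr.Address) (*init_.ExecParams, error) {
	ctor := &paych.ConstructorParams{From: from, To: to}
	var buf bytes.Buffer
	if err := ctor.MarshalCBOR(&buf); err != nil { // cbor-gen generated marshaller
		return nil, err
	}
	return &init_.ExecParams{
		CodeCID:           builtin.PaymentChannelActorCodeID,
		ConstructorParams: buf.Bytes(),
	}, nil
}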
package init
import (
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/cbor"
"github.com/filecoin-project/go-state-types/exitcode"
init0 "github.com/filecoin-project/specs-actors/actors/builtin/init"
cid "github.com/ipfs/go-cid"
"github.com/filecoin-project/specs-actors/v8/actors/builtin"
"github.com/filecoin-project/specs-actors/v8/actors/runtime"
"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)
// The init actor uniquely has the power to create new actors.
// It maintains a table resolving pubkey and temporary actor addresses to the canonical ID-addresses.
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.Exec,
}
}
func (a Actor) Code() cid.Cid {
return builtin.InitActorCodeID
}
func (a Actor) IsSingleton() bool {
return true
}
func (a Actor) State() cbor.Er { return new(State) }
var _ runtime.VMActor = Actor{}
//type ConstructorParams struct {
// NetworkName string
//}
type ConstructorParams = init0.ConstructorParams
func (a Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
st, err := ConstructState(adt.AsStore(rt), params.NetworkName)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
rt.StateCreate(st)
return nil
}
//type ExecParams struct {
// CodeCID cid.Cid `checked:"true"` // invalid CIDs won't get committed to the state tree
// ConstructorParams []byte
//}
type ExecParams = init0.ExecParams
//type ExecReturn struct {
// IDAddress addr.Address // The canonical ID-based address for the actor.
// RobustAddress addr.Address // A more expensive but re-org-safe address for the newly created actor.
//}
type ExecReturn = init0.ExecReturn
func (a Actor) Exec(rt runtime.Runtime, params *ExecParams) *ExecReturn {
rt.ValidateImmediateCallerAcceptAny()
callerCodeCID, ok := rt.GetActorCodeCID(rt.Caller())
builtin.RequireState(rt, ok, "no code for caller at %s", rt.Caller())
if !canExec(callerCodeCID, params.CodeCID) {
rt.Abortf(exitcode.ErrForbidden, "caller type %v cannot exec actor type %v", callerCodeCID, params.CodeCID)
}
// Compute a re-org-stable address.
// This address exists for use by messages coming from outside the system, in order to
// stably address the newly created actor even if a chain re-org causes it to end up with
// a different ID.
uniqueAddress := rt.NewActorAddress()
// Allocate an ID for this actor.
// Store mapping of pubkey or actor address to actor ID
var st State
var idAddr addr.Address
rt.StateTransaction(&st, func() {
var err error
idAddr, err = st.MapAddressToNewID(adt.AsStore(rt), uniqueAddress)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to allocate ID address")
})
// Create an empty actor.
rt.CreateActor(params.CodeCID, idAddr)
// Invoke constructor.
code := rt.Send(idAddr, builtin.MethodConstructor, builtin.CBORBytes(params.ConstructorParams), rt.ValueReceived(), &builtin.Discard{})
builtin.RequireSuccess(rt, code, "constructor failed")
return &ExecReturn{IDAddress: idAddr, RobustAddress: uniqueAddress}
}
func canExec(callerCodeID cid.Cid, execCodeID cid.Cid) bool {
switch execCodeID {
case builtin.StorageMinerActorCodeID:
if callerCodeID == builtin.StoragePowerActorCodeID {
return true
}
return false
case builtin.PaymentChannelActorCodeID, builtin.MultisigActorCodeID:
return true
default:
return false
}
}
RewardActor
- State: reliable
- Theory Audit: done
The RewardActor
is where unminted Filecoin tokens are kept. The actor distributes rewards directly to miner actors, where they are locked for vesting. The reward value used for the current epoch is updated at the end of an epoch through a cron tick.
package reward
import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/cbor"
"github.com/filecoin-project/go-state-types/exitcode"
rtt "github.com/filecoin-project/go-state-types/rt"
reward0 "github.com/filecoin-project/specs-actors/actors/builtin/reward"
reward6 "github.com/filecoin-project/specs-actors/v6/actors/builtin/reward"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/specs-actors/v8/actors/builtin"
"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)
// PenaltyMultiplier is the factor miner penalties are scaled up by
const PenaltyMultiplier = 3
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.AwardBlockReward,
3: a.ThisEpochReward,
4: a.UpdateNetworkKPI,
}
}
func (a Actor) Code() cid.Cid {
return builtin.RewardActorCodeID
}
func (a Actor) IsSingleton() bool {
return true
}
func (a Actor) State() cbor.Er {
return new(State)
}
var _ runtime.VMActor = Actor{}
func (a Actor) Constructor(rt runtime.Runtime, currRealizedPower *abi.StoragePower) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
if currRealizedPower == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
return nil // linter does not understand abort exiting
}
st := ConstructState(*currRealizedPower)
rt.StateCreate(st)
return nil
}
//type AwardBlockRewardParams struct {
// Miner address.Address
// Penalty abi.TokenAmount // penalty for including bad messages in a block, >= 0
// GasReward abi.TokenAmount // gas reward from all gas fees in a block, >= 0
// WinCount int64 // number of reward units won, > 0
//}
type AwardBlockRewardParams = reward0.AwardBlockRewardParams
// Awards a reward to a block producer.
// This method is called only by the system actor, implicitly, as the last message in the evaluation of a block.
// The system actor thus computes the parameters and attached value.
//
// The reward includes two components:
// - the epoch block reward, computed and paid from the reward actor's balance,
// - the block gas reward, expected to be transferred to the reward actor with this invocation.
//
// The reward is reduced before the residual is credited to the block producer, by:
// - a penalty amount, provided as a parameter, which is burnt,
func (a Actor) AwardBlockReward(rt runtime.Runtime, params *AwardBlockRewardParams) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
priorBalance := rt.CurrentBalance()
if params.Penalty.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "negative penalty %v", params.Penalty)
}
if params.GasReward.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "negative gas reward %v", params.GasReward)
}
if priorBalance.LessThan(params.GasReward) {
rt.Abortf(exitcode.ErrIllegalState, "actor current balance %v insufficient to pay gas reward %v",
priorBalance, params.GasReward)
}
if params.WinCount <= 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "invalid win count %d", params.WinCount)
}
minerAddr, ok := rt.ResolveAddress(params.Miner)
if !ok {
rt.Abortf(exitcode.ErrNotFound, "failed to resolve given owner address")
}
// The miner penalty is scaled up by a factor of PenaltyMultiplier
penalty := big.Mul(big.NewInt(PenaltyMultiplier), params.Penalty)
totalReward := big.Zero()
var st State
rt.StateTransaction(&st, func() {
blockReward := big.Mul(st.ThisEpochReward, big.NewInt(params.WinCount))
blockReward = big.Div(blockReward, big.NewInt(builtin.ExpectedLeadersPerEpoch))
totalReward = big.Add(blockReward, params.GasReward)
currBalance := rt.CurrentBalance()
if totalReward.GreaterThan(currBalance) {
rt.Log(rtt.WARN, "reward actor balance %d below totalReward expected %d, paying out rest of balance", currBalance, totalReward)
totalReward = currBalance
blockReward = big.Sub(totalReward, params.GasReward)
// Since we have already asserted the balance is greater than gas reward blockReward is >= 0
builtin.RequireState(rt, blockReward.GreaterThanEqual(big.Zero()), "programming error, block reward %v below zero", blockReward)
}
st.TotalStoragePowerReward = big.Add(st.TotalStoragePowerReward, blockReward)
})
builtin.RequireState(rt, totalReward.LessThanEqual(priorBalance), "reward %v exceeds balance %v", totalReward, priorBalance)
// if this fails, we can assume the miner is responsible and avoid failing here.
rewardParams := builtin.ApplyRewardParams{
Reward: totalReward,
Penalty: penalty,
}
code := rt.Send(minerAddr, builtin.MethodsMiner.ApplyRewards, &rewardParams, totalReward, &builtin.Discard{})
if !code.IsSuccess() {
rt.Log(rtt.ERROR, "failed to send ApplyRewards call to the miner actor with funds: %v, code: %v", totalReward, code)
code := rt.Send(builtin.BurntFundsActorAddr, builtin.MethodSend, nil, totalReward, &builtin.Discard{})
if !code.IsSuccess() {
rt.Log(rtt.ERROR, "failed to send unsent reward to the burnt funds actor, code: %v", code)
}
}
return nil
}
// Changed since v0:
// - removed ThisEpochReward (unsmoothed)
//type ThisEpochRewardReturn struct {
// ThisEpochRewardSmoothed smoothing.FilterEstimate
// ThisEpochBaselinePower abi.StoragePower
//}
type ThisEpochRewardReturn = reward6.ThisEpochRewardReturn
// The award value used for the current epoch, updated at the end of an epoch
// through cron tick. In the case previous epochs were null blocks this
// is the reward value as calculated at the last non-null epoch.
func (a Actor) ThisEpochReward(rt runtime.Runtime, _ *abi.EmptyValue) *ThisEpochRewardReturn {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.StateReadonly(&st)
return &ThisEpochRewardReturn{
ThisEpochRewardSmoothed: st.ThisEpochRewardSmoothed,
ThisEpochBaselinePower: st.ThisEpochBaselinePower,
}
}
// Called at the end of each epoch by the power actor (in turn by its cron hook).
// This is only invoked for non-empty tipsets, but catches up any number of null
// epochs to compute the next epoch reward.
func (a Actor) UpdateNetworkKPI(rt runtime.Runtime, currRealizedPower *abi.StoragePower) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
if currRealizedPower == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "argument should not be nil")
}
var st State
rt.StateTransaction(&st, func() {
prev := st.Epoch
// if there were null runs catch up the computation until
// st.Epoch == rt.CurrEpoch()
for st.Epoch < rt.CurrEpoch() {
// Update to next epoch to process null rounds
st.updateToNextEpoch(*currRealizedPower)
}
st.updateToNextEpochWithReward(*currRealizedPower)
// only update smoothed estimates after updating reward and epoch
st.updateSmoothedEstimates(st.Epoch - prev)
})
return nil
}
AccountActor
- State: reliable
- Theory Audit: done
The AccountActor is responsible for user accounts. Account actors are not created by the InitActor; their constructor is invoked by the system. Account actors are created by sending a message to a public-key style address. The address must be a BLS or SECP256K1 address, otherwise the constructor aborts with an exit error. The account actor updates the state tree with the new actor address.
package account
import (
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/cbor"
"github.com/filecoin-project/go-state-types/exitcode"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/specs-actors/v8/actors/builtin"
"github.com/filecoin-project/specs-actors/v8/actors/runtime"
)
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
1: a.Constructor,
2: a.PubkeyAddress,
}
}
func (a Actor) Code() cid.Cid {
return builtin.AccountActorCodeID
}
func (a Actor) State() cbor.Er {
return new(State)
}
var _ runtime.VMActor = Actor{}
type State struct {
Address addr.Address
}
func (a Actor) Constructor(rt runtime.Runtime, address *addr.Address) *abi.EmptyValue {
// Account actors are created implicitly by sending a message to a pubkey-style address.
// This constructor is not invoked by the InitActor, but by the system.
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
switch address.Protocol() {
case addr.SECP256K1:
case addr.BLS:
break // ok
default:
rt.Abortf(exitcode.ErrIllegalArgument, "address must use BLS or SECP protocol, got %v", address.Protocol())
}
st := State{Address: *address}
rt.StateCreate(&st)
return nil
}
// Fetches the pubkey-type address from this actor.
func (a Actor) PubkeyAddress(rt runtime.Runtime, _ *abi.EmptyValue) *addr.Address {
rt.ValidateImmediateCallerAcceptAny()
var st State
rt.StateReadonly(&st)
return &st.Address
}
VM Interpreter - Message Invocation (Outside VM)
- State: wip
- Theory Audit: wip
The VM interpreter orchestrates the execution of messages from a tipset on that tipset’s parent state, producing a new state and a sequence of message receipts. The CIDs of this new state and of the receipt collection are included in blocks from the subsequent epoch, which must agree about those CIDs in order to form a new tipset.
Every state change is driven by the execution of a message. The messages from all the blocks in a tipset must be executed in order to produce a next state. All messages from the first block are executed before those of the second and subsequent blocks in the tipset. For each block, BLS-aggregated messages are executed first, then SECP signed messages.
Implicit messages
- State: wip
- Theory Audit: wip
In addition to the messages explicitly included in each block, a few state changes at each epoch are made by implicit messages. Implicit messages are not transmitted between nodes, but constructed by the interpreter at evaluation time.
For each block in a tipset, an implicit message:
- invokes the block producer’s miner actor to process the (already-validated) election PoSt submission, as the first message in the block;
- invokes the reward actor to pay the block reward to the miner’s owner account, as the final message in the block;
For each tipset, an implicit message:
- invokes the cron actor to process automated checks and payments, as the final message in the tipset.
All implicit messages are constructed with a From
address being the distinguished system account actor.
They specify a gas price of zero, but must be included in the computation.
They must succeed (have an exit code of zero) in order for the new state to be computed.
Receipts for implicit messages are not included in the receipt list; only explicit messages have an
explicit receipt.
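As a hedged illustration of these rules, a sketch of the implicit cron tick message an interpreter might construct for a tipset is shown below, using the Lotus message type and the specs-actors builtin addresses. The GasLimit value and the nonce handling are schematic; the exact values are implementation details.

package example

import (
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/specs-actors/v8/actors/builtin"
)

// makeCronTickMessage sketches the implicit message that invokes the cron
// actor as the final message of a tipset: sent from the system actor, with a
// zero gas price, and never transmitted on the network.
func makeCronTickMessage() *types.Message {
	return &types.Message{
		From:       builtin.SystemActorAddr,
		To:         builtin.CronActorAddr,
		Value:      big.Zero(),                    // no value transfer
		GasFeeCap:  big.Zero(),                    // zero gas price: nothing is charged or burnt
		GasPremium: big.Zero(),
		GasLimit:   1 << 30,                       // effectively unbounded; illustrative value only
		Method:     builtin.MethodsCron.EpochTick, // EpochTick, method number 2
		Params:     nil,
		// Nonce checks are skipped for implicit messages, so no nonce is set here.
	}
}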
Gas payments
- State: wip
- Theory Audit: wip
In most cases, the sender of a message pays the miner which produced the block including that message a gas fee for its execution.
The gas payments for each message execution are paid to the miner owner account immediately after that message is executed. There are no encumbrances to either the block reward or gas fees earned: both may be spent immediately.
Duplicate messages
- State: wip
- Theory Audit: wip
Since different miners produce blocks in the same epoch, multiple blocks in a single tipset may include the same message (identified by the same CID). When this happens, the message is processed only the first time it is encountered in the tipset’s canonical order. Subsequent instances of the message are ignored and do not result in any state mutation, produce a receipt, or pay gas to the block producer.
The sequence of executions for a tipset is thus summarised:
- pay reward for first block
- process election post for first block
- messages for first block (BLS before SECP)
- pay reward for second block
- process election post for second block
- messages for second block (BLS before SECP, skipping any already encountered)
[... subsequent blocks ...]
- cron tick
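A minimal sketch of this de-duplication is shown below, assuming message identity is keyed on the unsigned message CID and that blocks and their messages are already in canonical order; the real interpreter interleaves this with the reward and election messages listed above.

package example

import (
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/ipfs/go-cid"
)

// dedupedTipsetMessages returns the messages of a tipset in execution order,
// skipping any message whose CID has already been seen. Skipped duplicates are
// not executed, produce no receipt, and pay no gas to the block producer.
func dedupedTipsetMessages(blocks []*types.FullBlock) []*types.Message {
	seen := make(map[cid.Cid]struct{})
	var out []*types.Message
	appendIfNew := func(m *types.Message) {
		if _, ok := seen[m.Cid()]; ok {
			return
		}
		seen[m.Cid()] = struct{}{}
		out = append(out, m)
	}
	for _, blk := range blocks {
		// Within each block, BLS-aggregated messages run before secp messages.
		for _, m := range blk.BlsMessages {
			appendIfNew(m)
		}
		for _, sm := range blk.SecpkMessages {
			appendIfNew(&sm.Message)
		}
	}
	return out
}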
Message validity and failure
- State: wip
- Theory Audit: wip
Every message in a valid block can be processed and produce a receipt (note that block validity implies all messages are syntactically valid – see Message Syntax – and correctly signed). However, execution may or may not succeed, depending on the state to which the message is applied. If the execution of a message fails, the corresponding receipt will carry a non-zero exit code.
If a message fails due to a reason that can reasonably be attributed to the miner including a message that could never have succeeded in the parent state, or because the sender lacks funds to cover the maximum message cost, then the miner pays a penalty by burning the gas fee (rather than the sender paying fees to the block miner).
The only state changes resulting from a message failure are either:
- incrementing of the sending actor's CallSeqNum, and payment of gas fees from the sender to the owner of the miner of the block including the message; or
- a penalty equivalent to the gas fee for the failed message, burnt by the miner (sender's CallSeqNum unchanged).
A message execution will fail if, in the immediately preceding state:
- the From actor does not exist in the state (miner penalized),
- the From actor is not an account actor (miner penalized),
- the CallSeqNum of the message does not match the CallSeqNum of the From actor (miner penalized),
- the From actor does not have sufficient balance to cover the sum of the message Value plus the maximum gas cost, GasLimit * GasPrice (miner penalized),
- the To actor does not exist in state and the To address is not a pubkey-style address,
- the To actor exists (or is implicitly created as an account) but does not have a method corresponding to the non-zero MethodNum,
- deserialized Params is not an array of length matching the arity of the To actor's MethodNum method,
- deserialized Params are not valid for the types specified by the To actor's MethodNum method,
- the invoked method consumes more gas than the GasLimit allows,
- the invoked method exits with a non-zero code (via Runtime.Abort()), or
- any inner message sent by the receiver fails for any of the above reasons.
Note that if the To actor does not exist in state and the address is a valid H(pubkey) address, it will be created as an account actor.
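The sender-related checks above, whose failure penalises the including miner, can be sketched as follows. The state accessors here are simplified placeholders (not the concrete implementation types), and the maximum gas cost uses the message's gas fee cap in the role that GasPrice plays in the list above.

package example

import (
	"errors"

	"github.com/filecoin-project/go-state-types/big"
)

// senderState is a simplified stand-in for the sender actor as read from the
// parent state; the real checks operate on the state tree directly.
type senderState struct {
	Exists         bool
	IsAccountActor bool
	CallSeqNum     uint64
	Balance        big.Int
}

// checkSender returns an error for the failure modes that are attributed to
// the miner including the message rather than to the sender.
func checkSender(s senderState, msgNonce uint64, value, gasFeeCap big.Int, gasLimit int64) error {
	if !s.Exists {
		return errors.New("sender actor does not exist in state (miner penalised)")
	}
	if !s.IsAccountActor {
		return errors.New("sender is not an account actor (miner penalised)")
	}
	if s.CallSeqNum != msgNonce {
		return errors.New("message CallSeqNum does not match sender actor (miner penalised)")
	}
	// Maximum possible charge: the transferred value plus the gas cap over the full gas limit.
	maxCost := big.Add(value, big.Mul(gasFeeCap, big.NewInt(gasLimit)))
	if s.Balance.LessThan(maxCost) {
		return errors.New("sender balance below Value plus maximum gas cost (miner penalised)")
	}
	return nil
}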
Blockchain
- State: reliable
- Theory Audit: wip
The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol. It is the main interface linking various actors in the Filecoin system.
The Filecoin blockchain system includes:
- A Message Pool subsystem that nodes use to track and propagate messages that miners have declared they want to include in the blockchain.
- A Virtual Machine subsystem used to interpret and execute messages in order to update system state.
- A State Tree subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the VM from a given subchain.
- A Chain Synchronisation (ChainSync) subsystem that tracks and propagates validated message blocks, maintaining sets of candidate chains on which the miner may mine and running syntactic validation on incoming blocks.
- A Storage Power Consensus subsystem which tracks storage state (i.e., the Storage Subsystem) for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.
The blockchain system also includes:
- A Chain Manager, which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
- A Block Producer which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.
At a high-level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain will earn them block rewards. Filecoin’s blockchain runs on storage power. That is, its consensus algorithm by which miners agree on which subchain to mine is predicated on the amount of storage backing that subchain. At a high-level, the Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.
Blocks
- State: reliable
- Theory Audit: n/a
The Block is the main unit of the Filecoin blockchain, as is also the case with most other blockchains. Block messages are directly linked with Tipsets, which are groups of Block messages as detailed later on in this section. In the following we discuss the main structure of a Block message and the process of validating Block messages in the Filecoin blockchain.
Block
- State: reliable
- Theory Audit: n/a
The Block is the main unit of the Filecoin blockchain.
The Block structure in the Filecoin blockchain is composed of: i) the Block Header, ii) the list of messages inside the block, and iii) the signed messages. This is represented inside the FullBlock
abstraction. The messages indicate the required set of changes to apply in order to arrive at a deterministic state of the chain.
The Lotus implementation of the block has the following struct:
type FullBlock struct {
Header *BlockHeader
BlsMessages []*Message
SecpkMessages []*SignedMessage
}
Note
A block is functionally the same as a block header in the Filecoin protocol. While a block header contains Merkle links to the full system state, messages, and message receipts, a block can be thought of as the full set of this information (not just the Merkle roots, but rather the full data of the state tree, message tree, receipts tree, etc.). Because a full block is large in size, the Filecoin blockchain consists of block headers rather than full blocks. We often use the terms block and block header interchangeably.
A BlockHeader
is a canonical representation of a block. BlockHeaders are propagated between miner nodes. From the BlockHeader
message, a miner has all the required information to apply the associated FullBlock
’s state and update the chain. In order to be able to do this, the minimum set of information items that need to be included in the BlockHeader
are shown below and include among others: the miner’s address, the Ticket, the
Proof of SpaceTime, the CID of the parents where this block evolved from in the IPLD DAG, as well as the messages’ own CIDs.
The Lotus implementation of the block header has the following structs:
type BlockHeader struct {
Miner address.Address // 0 unique per block/miner
Ticket *Ticket // 1 unique per block/miner: should be a valid VRF
ElectionProof *ElectionProof // 2 unique per block/miner: should be a valid VRF
BeaconEntries []BeaconEntry // 3 identical for all blocks in same tipset
WinPoStProof []proof.PoStProof // 4 unique per block/miner
Parents []cid.Cid // 5 identical for all blocks in same tipset
ParentWeight BigInt // 6 identical for all blocks in same tipset
Height abi.ChainEpoch // 7 identical for all blocks in same tipset
ParentStateRoot cid.Cid // 8 identical for all blocks in same tipset
ParentMessageReceipts cid.Cid // 9 identical for all blocks in same tipset
Messages cid.Cid // 10 unique per block
BLSAggregate *crypto.Signature // 11 unique per block: aggregate of BLS messages from above
Timestamp uint64 // 12 identical for all blocks in same tipset / hard-tied to the value of Height above
BlockSig *crypto.Signature // 13 unique per block/miner: miner signature
ForkSignaling uint64 // 14 currently unused/undefined
ParentBaseFee abi.TokenAmount // 15 identical for all blocks in same tipset: the base fee after executing parent tipset
validated bool // internal, true if the signature has been validated
}
type Ticket struct {
VRFProof []byte
}
type ElectionProof struct {
WinCount int64
VRFProof []byte
}
type BeaconEntry struct {
Round uint64
Data []byte
}
The BlockHeader
structure has to refer to the TicketWinner of the current round which ensures the correct winner is passed to
ChainSync.
func IsTicketWinner(vrfTicket []byte, mypow BigInt, totpow BigInt) bool
The Message
structure has to include the source (From
) and destination (To
) addresses, a Nonce
and the GasPrice
.
The Lotus implementation of the message has the following structure:
type Message struct {
Version uint64
To address.Address
From address.Address
Nonce uint64
Value abi.TokenAmount
GasLimit int64
GasFeeCap abi.TokenAmount
GasPremium abi.TokenAmount
Method abi.MethodNum
Params []byte
}
The message is also validated before it is passed to the chain synchronization logic:
func (m *Message) ValidForBlockInclusion(minGas int64, version network.Version) error {
if m.Version != 0 {
return xerrors.New("'Version' unsupported")
}
if m.To == address.Undef {
return xerrors.New("'To' address cannot be empty")
}
if m.To == buildconstants.ZeroAddress && version >= network.Version7 {
return xerrors.New("invalid 'To' address")
}
if !abi.AddressValidForNetworkVersion(m.To, version) {
return xerrors.New("'To' address protocol unsupported for network version")
}
if m.From == address.Undef {
return xerrors.New("'From' address cannot be empty")
}
if !abi.AddressValidForNetworkVersion(m.From, version) {
return xerrors.New("'From' address protocol unsupported for network version")
}
if m.Value.Int == nil {
return xerrors.New("'Value' cannot be nil")
}
if m.Value.LessThan(big.Zero()) {
return xerrors.New("'Value' field cannot be negative")
}
if m.Value.GreaterThan(TotalFilecoinInt) {
return xerrors.New("'Value' field cannot be greater than total filecoin supply")
}
if m.GasFeeCap.Int == nil {
return xerrors.New("'GasFeeCap' cannot be nil")
}
if m.GasFeeCap.LessThan(big.Zero()) {
return xerrors.New("'GasFeeCap' field cannot be negative")
}
if m.GasPremium.Int == nil {
return xerrors.New("'GasPremium' cannot be nil")
}
if m.GasPremium.LessThan(big.Zero()) {
return xerrors.New("'GasPremium' field cannot be negative")
}
if m.GasPremium.GreaterThan(m.GasFeeCap) {
return xerrors.New("'GasFeeCap' less than 'GasPremium'")
}
if m.GasLimit > buildconstants.BlockGasLimit {
return xerrors.Errorf("'GasLimit' field cannot be greater than a block's gas limit (%d > %d)", m.GasLimit, buildconstants.BlockGasLimit)
}
if m.GasLimit <= 0 {
return xerrors.Errorf("'GasLimit' field %d must be positive", m.GasLimit)
}
// since prices might vary with time, this is technically semantic validation
if m.GasLimit < minGas {
return xerrors.Errorf("'GasLimit' field cannot be less than the cost of storing a message on chain %d < %d", m.GasLimit, minGas)
}
return nil
}
Block syntax validation
- State: reliable
- Theory Audit: n/a
Syntax validation refers to validation that should be performed on a block and its messages without reference to outside information such as the parent state tree. This type of validation is sometimes called static validation.
An invalid block must not be transmitted or referenced as a parent.
A syntactically valid block header must decode into fields matching the definitions below, must be a valid CBOR PubSub BlockMsg message and must have:
- between 1 and 5*ec.ExpectedLeaders Parents CIDs if Epoch is greater than zero (else empty Parents),
- a non-negative ParentWeight,
- less than or equal to BlockMessageLimit number of messages,
- aggregate message CIDs, encapsulated in the MsgMeta structure, serialized to the Messages CID in the block header,
- a Miner address which is an ID-address. The Miner address in the block header should be present and correspond to a public-key address in the current chain state,
- a block signature (BlockSig) that belongs to the public-key address retrieved for the Miner,
- a non-negative Epoch,
- a positive Timestamp,
- a Ticket with non-empty VRFResult,
- an ElectionPoStOutput containing:
  - a Candidates array with between 1 and EC.ExpectedLeaders values (inclusive),
  - a non-empty PoStRandomness field,
  - a non-empty Proof field,
- a non-empty ForkSignal field.
A syntactically valid full block must have:
- all referenced messages syntactically valid,
- all referenced parent receipts syntactically valid,
- the sum of the serialized sizes of the block header and included messages is no greater than block.BlockMaxSize,
- the sum of the gas limits of all explicit messages is no greater than block.BlockGasLimit.
Note that validation of the block signature requires access to the miner worker address and public key from the parent tipset state, so signature validation forms part of semantic validation. Similarly, message signature validation requires lookup of the public key associated with each message’s From
account actor in the block’s parent state.
Block semantic validation
- State: reliable
- Theory Audit: n/a
Semantic validation refers to validation that requires reference to information outside the block header and messages themselves. Semantic validation relates to the parent tipset and state on which the block is built.
In order to proceed to semantic validation, the FullBlock must be assembled from the received block header by retrieving its Filecoin messages. Block message CIDs can be retrieved from the network and decoded into valid CBOR Message/SignedMessage.
In the Lotus implementation the semantic validation of a block is carried out by the Syncer
module:
func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock, useCache bool) (err error) {
defer func() {
// b.Cid() could panic for empty blocks that are used in tests.
if rerr := recover(); rerr != nil {
err = xerrors.Errorf("validate block panic: %w", rerr)
return
}
}()
if useCache {
isValidated, err := syncer.store.IsBlockValidated(ctx, b.Cid())
if err != nil {
return xerrors.Errorf("check block validation cache %s: %w", b.Cid(), err)
}
if isValidated {
return nil
}
}
validationStart := build.Clock.Now()
defer func() {
stats.Record(ctx, metrics.BlockValidationDurationMilliseconds.M(metrics.SinceInMilliseconds(validationStart)))
log.Infow("block validation", "took", time.Since(validationStart), "height", b.Header.Height, "age", time.Since(time.Unix(int64(b.Header.Timestamp), 0)))
}()
ctx, span := trace.StartSpan(ctx, "validateBlock")
defer span.End()
if err := syncer.consensus.ValidateBlock(ctx, b); err != nil {
return err
}
if useCache {
if err := syncer.store.MarkBlockAsValidated(ctx, b.Cid()); err != nil {
return xerrors.Errorf("caching block validation %s: %w", b.Cid(), err)
}
}
return nil
}
Messages are retrieved through the Syncer, which follows these two steps:
1. Assemble a FullTipSet populated with the single block received earlier. The Block's ParentWeight must be greater than the one from the (first block of the) heaviest tipset.
2. Retrieve all tipsets from the received Block down to our chain. Validation is expanded to every block inside these tipsets. The validation should ensure that:
- Beacon entries are ordered by their round number.
- The Tipset Parents CIDs match the fetched parent tipset through BlockSync.
A semantically valid block must meet all of the following requirements.
Parents-Related
- Parents listed in lexicographic order of their header's Ticket.
- ParentStateRoot CID of the block matches the state CID computed from the parent Tipset.
- ParentState matches the state tree produced by executing the parent tipset's messages (as defined by the VM interpreter) against that tipset's parent state.
- ParentMessageReceipts identifying the receipt list produced by parent tipset execution, with one receipt for each unique message from the parent tipset. In other words, the Block's ParentMessageReceipts CID matches the receipts CID computed from the parent tipset.
- ParentWeight matches the weight of the chain up to and including the parent tipset.
Time-Related
- Epoch is greater than that of its Parents, and
  - not in the future according to the node's local clock reading of the current epoch,
    - blocks with future epochs should not be rejected, but should not be evaluated (validated or included in a tipset) until the appropriate epoch,
  - not farther in the past than the soft finality as defined by SPC Finality,
    - this rule only applies when receiving new gossip blocks (i.e. from the current chain head), not when syncing to the chain for the first time.
- The Timestamp included is in seconds and:
  - must not be bigger than the current time plus AllowableClockDriftSecs,
  - must not be smaller than the previous block's Timestamp plus BlockDelay (including null blocks),
  - is of the precise value implied by the genesis block's timestamp, the network's block time and the Block's Epoch.
  (A sketch of these timestamp checks appears at the end of this section.)
Miner-Related
- The Miner is active in the storage power table in the parent tipset state. The Miner's address is registered in the Claims HAMT of the Power Actor.
- The TipSetState should be included for each tipset being validated.
  - Every Block in the tipset should belong to a different miner.
- The Actor associated with the message's From address exists, is an account actor and its Nonce matches the message Nonce.
- Valid proofs that the Miner proved access to sealed versions of the sectors it was challenged for are included. In order to achieve that:
  - draw randomness for the current epoch with the WinningPoSt domain separation tag,
  - get the list of sectors challenged in this epoch for this miner, based on the randomness drawn.
- The Miner is not slashed in the StoragePowerActor.
Beacon- & Ticket-Related
- Valid BeaconEntries should be included:
  - Check that every one of the BeaconEntries is a signature of a message: previousSignature || round signed using DRAND's public key.
  - All entries between MaxBeaconRoundForEpoch down to prevEntry (from the previous tipset) should be included.
- A Ticket derived from the minimum ticket from the parent tipset's block headers, with Ticket.VRFResult validly signed by the Miner actor's worker account public key.
- The ElectionProof Ticket is computed correctly by checking the BLS signature using the miner's key. The ElectionProof ticket should be a winning ticket.
Message- & Signature-Related
- secp256k1 messages are correctly signed by their sending actor's (From) worker account key,
- a BLSAggregate signature is included that signs the array of CIDs of all the BLS messages referenced by the block with their sending actor's key,
- a valid Signature over the block header's fields from the block's Miner actor's worker account public key is included.
- For each message in ValidForBlockInclusion() the following hold:
  - Message fields Version, To, From, Value, GasPrice, and GasLimit are correctly defined.
  - Message GasLimit is not under the message minimum gas cost (derived from chain height and message length).
- For each message in ApplyMessage (that is, before a message is executed), the following hold:
  - Basic gas and value checks in checkMessage():
    - The Message GasLimit is bigger than zero.
    - The Message GasPrice and Value are set.
  - The Message's storage gas cost is under the message's GasLimit.
  - The Message's Nonce matches the nonce in the Actor retrieved from the message's From address.
  - The Message's maximum gas cost (derived from its GasLimit, GasPrice, and Value) is under the balance of the Actor retrieved from the message's From address.
  - The Message's transfer Value is under the balance of the Actor retrieved from the message's From address.
There is no semantic validation of the messages included in a block beyond validation of their signatures. If all messages included in a block are syntactically valid then they may be executed and produce a receipt.
A chain sync system may perform syntactic and semantic validation in stages in order to minimize unnecessary resource expenditure.
If all of the above tests are successful, the block is marked as validated. Ultimately, an invalid block must not be propagated further or validated as a parent node.
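The Time-Related timestamp rules above reduce to a simple arithmetic check, sketched here under the assumption of a 30-second block delay and a one-second allowable clock drift; the authoritative values come from the network configuration.

package example

import (
	"errors"
	"time"
)

const (
	blockDelaySecs          = uint64(30) // epoch duration assumed here
	allowableClockDriftSecs = uint64(1)  // drift allowance assumed here
)

// checkTimestamp verifies that a block's timestamp is exactly the genesis
// timestamp plus Epoch * block delay, and is not too far in the future.
func checkTimestamp(genesisTs, blockTs uint64, epoch uint64, now time.Time) error {
	expected := genesisTs + epoch*blockDelaySecs
	if blockTs != expected {
		return errors.New("timestamp does not match genesis timestamp plus epoch * block delay")
	}
	if blockTs > uint64(now.Unix())+allowableClockDriftSecs {
		return errors.New("timestamp too far in the future")
	}
	return nil
}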
Tipset
- State: reliable
- Theory Audit: n/a
Expected Consensus probabilistically elects multiple leaders in each epoch meaning a Filecoin chain may contain zero or multiple blocks at each epoch (one per elected miner). Blocks from the same epoch are assembled into tipsets. The VM Interpreter modifies the Filecoin state tree by executing all messages in a tipset (after de-duplication of identical messages included in more than one block).
Each block references a parent tipset and validates that tipset’s state, while proposing messages to be included for the current epoch. The state to which a new block’s messages apply cannot be known until that block is incorporated into a tipset. It is thus not meaningful to execute the messages from a single block in isolation: a new state tree is only known once all messages in that block’s tipset are executed.
A valid tipset contains a non-empty collection of blocks that have distinct miners and all specify identical:
- Epoch
- Parents
- ParentWeight
- StateRoot
- ReceiptsRoot
The blocks in a tipset are canonically ordered by the lexicographic ordering of the bytes in each block’s ticket, breaking ties with the bytes of the CID of the block itself.
Due to network propagation delay, it is possible for a miner in epoch N+1 to omit valid blocks mined at epoch N from their parent tipset. This does not make the newly generated block invalid, it does however reduce its weight and chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.
Block producers are expected to coordinate how they select messages for inclusion in blocks in order to avoid duplicates and thus maximize their expected earnings from message fees (see Message Pool).
The main Tipset structure in the Lotus implementation includes the following:
type TipSet struct {
cids []cid.Cid
blks []*BlockHeader
height abi.ChainEpoch
}
Semantic validation of a Tipset includes the following checks.
func NewTipSet(blks []*BlockHeader) (*TipSet, error) {
if len(blks) == 0 {
return nil, xerrors.Errorf("NewTipSet called with zero length array of blocks")
}
sort.Slice(blks, tipsetSortFunc(blks))
var ts TipSet
ts.cids = []cid.Cid{blks[0].Cid()}
ts.blks = blks
for _, b := range blks[1:] {
if b.Height != blks[0].Height {
return nil, fmt.Errorf("cannot create tipset with mismatching heights")
}
if len(blks[0].Parents) != len(b.Parents) {
return nil, fmt.Errorf("cannot create tipset with mismatching number of parents")
}
for i, cid := range b.Parents {
if cid != blks[0].Parents[i] {
return nil, fmt.Errorf("cannot create tipset with mismatching parents")
}
}
ts.cids = append(ts.cids, b.Cid())
}
ts.height = blks[0].Height
return &ts, nil
}
Chain Manager
- State: reliable
- Theory Audit: n/a
The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.
In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for their use, enabling them, for instance, to sample randomness from the chain or to see which block has most recently been finalized.
Chain Extension
- State: reliable
- Theory Audit: n/a
Incoming block reception
- State: reliable
- Theory Audit: n/a
For every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:
- it is able to add to the current heaviest subchain, through the reception of another block in that subchain, or
- it is able to discard it, as the block was mined before finality.
It is important to note that ahead of finality, a given subchain may be abandoned for another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.
Chain selection is a crucial component of how the Filecoin blockchain works. In brief, every chain has an associated weight accounting for the number of blocks mined on it and so the power (storage) they track. The full details of how selection works are provided in the Chain Selection section.
Notes/Recommendations:
- In order to make certain validation checks simpler, blocks should be indexed by height and by parent set (a sketch of such an index follows this list). That way sets of blocks with a given height and common parents may be quickly queried.
- It may also be useful to compute and cache the resultant aggregate state of blocks in these sets, this saves extra state computation when checking which state root to start a block at when it has multiple parents.
- It is recommended that blocks are kept in the local datastore regardless of whether they are understood as the best tip at this point - this is to avoid having to refetch the same blocks in the future.
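As noted above, indexing blocks by (height, parent set) makes several validation checks simpler. A minimal sketch of such an index is given below; the key derivation and types are illustrative only.

package example

import (
	"strings"

	"github.com/filecoin-project/lotus/chain/types"
)

// tipsetKey groups blocks that could form a tipset: same height and same parent set.
type tipsetKey struct {
	height  int64
	parents string // concatenation of parent CIDs, illustrative canonical form
}

type blockIndex struct {
	byKey map[tipsetKey][]*types.BlockHeader
}

func newBlockIndex() *blockIndex {
	return &blockIndex{byKey: make(map[tipsetKey][]*types.BlockHeader)}
}

func keyOf(h *types.BlockHeader) tipsetKey {
	var sb strings.Builder
	for _, p := range h.Parents {
		sb.WriteString(p.String())
	}
	return tipsetKey{height: int64(h.Height), parents: sb.String()}
}

// add records a received block under its (height, parents) key.
func (ix *blockIndex) add(h *types.BlockHeader) {
	k := keyOf(h)
	ix.byKey[k] = append(ix.byKey[k], h)
}

// siblings returns all known blocks sharing a block's height and parent set,
// i.e. the candidates for assembling a tipset around it.
func (ix *blockIndex) siblings(h *types.BlockHeader) []*types.BlockHeader {
	return ix.byKey[keyOf(h)]
}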
ChainTipsManager
- State: reliable
- Theory Audit: n/a
The Chain Tips Manager is a subcomponent of Filecoin consensus that is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.
// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) {}
// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) {}
// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset()
// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)
Block Producer
- State: reliable
- Theory Audit: wip
Mining Blocks
- State: reliable
- Theory Audit: wip
A miner registered with the storage power actor may begin generating and checking election tickets if it has proven storage that meets the Minimum Miner Size threshold requirement.
In order to do so, the miner must be running chain validation, and be keeping track of the most recent blocks received. A miner’s new block will be based on parents from the previous epoch.
Block Creation
- State: reliable
- Theory Audit: wip
Producing a block for epoch H requires waiting for the beacon entry for that epoch and using it to run GenerateElectionProof. If WinCount ≥ 1 (i.e., when the miner is elected), the same beacon entry is used to run WinningPoSt. Armed with the ElectionProof ticket (the output of GenerateElectionProof) and the WinningPoSt proof, the miner can produce a new block.
See VM Interpreter for details of parent tipset evaluation, and Block for constraints on valid block header values.
To create a block, the eligible miner must compute a few fields:
Parents
- the CIDs of the parent tipset’s blocks.ParentWeight
- the parent chain’s weight (see Chain Selection).ParentState
- the CID of the state root from the parent tipset state evaluation (see the VM Interpreter).ParentMessageReceipts
- the CID of the root of an AMT containing receipts produced while computingParentState
.Epoch
- the block’s epoch, derived from theParents
epoch and the number of epochs it took to generate this block.Timestamp
- a Unix timestamp, in seconds, generated at block creation.BeaconEntries
- a set of drand entries generated since the last block (see Beacon Entries).Ticket
- a new ticket generated from that in the prior epoch (see Ticket Generation).Miner
- the block producer’s miner actor address.Messages
- The CID of aTxMeta
object containing message proposed for inclusion in the new block:- Select a set of messages from the mempool to include in the block, satisfying block size and gas limits
- Separate the messages into BLS signed messages and secpk signed messages
TxMeta.BLSMessages
: The CID of the root of an AMT comprising the bareUnsignedMessage
sTxMeta.SECPMessages
: the CID of the root of an AMT comprising theSignedMessage
s
BeaconEntries
: a list of beacon entries to derive randomness fromBLSAggregate
- The aggregated signature of all messages in the block that used BLS signing.Signature
- A signature with the miner’s worker account private key (must also match the ticket signature) over the block header’s serialized representation (with empty signature).ForkSignaling
- A uint64 flag used as part of signaling forks. Should be set to 0 by default.
Note that the messages to be included in a block need not be evaluated in order to produce a valid block. A miner may wish to speculatively evaluate the messages anyway in order to optimize for including messages which will succeed in execution and pay the most gas.
The block reward is not evaluated when producing a block. It is paid when the block is included in a tipset in the following epoch.
The block’s signature ensures integrity of the block after propagation, since unlike many PoW blockchains, a winning ticket is found independently of block generation.
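As a hedged sketch of the Messages/TxMeta step above, the selected mempool messages can be split into the BLS and secp groups as follows; building the AMTs and computing the TxMeta CID is left to the implementation.

package example

import (
	"github.com/filecoin-project/go-state-types/crypto"
	"github.com/filecoin-project/lotus/chain/types"
)

// splitForTxMeta separates the selected messages into the two collections
// referenced by TxMeta: bare messages for BLS-signed ones (whose signatures
// are folded into BLSAggregate) and full signed messages for secp ones.
func splitForTxMeta(selected []*types.SignedMessage) (bls []*types.Message, secp []*types.SignedMessage) {
	for _, sm := range selected {
		if sm.Signature.Type == crypto.SigTypeBLS {
			bls = append(bls, &sm.Message)
		} else {
			secp = append(secp, sm)
		}
	}
	return bls, secp
}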
Block Broadcast
- State: reliable
- Theory Audit: wip
An eligible miner propagates the completed block to the network using the
GossipSub /fil/blocks
topic and, assuming everything was done correctly,
the network will accept it and other miners will mine on top of it, earning the miner a block reward.
Miners should output their valid block as soon as it is produced; otherwise they risk other miners receiving the block after the EPOCH_CUTOFF and not including it in the current epoch.
Block Rewards
- State: reliable
- Theory Audit: wip
Block rewards are handled by the Reward Actor. Further details on the Block Reward are discussed in the Filecoin Token section and details about the Block Reward Collateral are discussed in the Miner Collaterals section.
Message Pool
- State: stable
- Theory Audit: wip
The Message Pool, or mpool or mempool, is a pool of messages in the Filecoin protocol. It acts as the interface between Filecoin nodes and the peer-to-peer network of other nodes used for off-chain message propagation. The message pool is used by nodes to maintain a set of messages they want to transmit to the Filecoin VM and add to the chain (i.e., add for "on-chain" execution).
In order for a message to end up in the blockchain it first has to be in the message pool. In reality, at least in the Lotus implementation of Filecoin, there is no central pool of messages stored somewhere. Instead, the message pool is an abstraction and is realised as a list of messages kept by every node in the network. Therefore, when a node puts a new message in the message pool, this message is propagated to the rest of the network using libp2p’s pubsub protocol, GossipSub. Nodes need to subscribe to the corresponding pubsub topic in order to receive messages.
Message propagation using GossipSub does not happen immediately and therefore, there is some lag before message pools at different nodes can be in sync. In practice, and given continuous streams of messages being added to the message pool and the delay to propagate messages, the message pool is never synchronised across all nodes in the network. This is not a deficiency of the system, as the message pool does not need to be synchronized across the network.
The message pool should have a maximum size defined to avoid DoS attacks, where nodes are spammed and run out of memory. The recommended size for the message pool is 5000 messages.
Message Propagation
- State: stable
- Theory Audit: n/a
The message pool has to interface with the libp2p pubsub GossipSub protocol, because messages are propagated over GossipSub on the corresponding /fil/msgs/ topic. Every message is announced in the corresponding /fil/msgs/ topic by any node participating in the network.
There are two main pubsub topics related to messages and blocks: i) the /fil/msgs/
topic that carries messages and, ii) the /fil/blocks/
topic that carries blocks. The /fil/msgs/
topic is linked to the mpool
. The process is as follows:
- When a client wants to send a message in the Filecoin network, they publish the message to the
/fil/msgs/
topic. - The message propagates to all other nodes in the network using GossipSub and eventually ends up in the
mpool
of all miners. - Depending on cryptoeconomic rules, some miner will eventually pick the message from the
mpool
(together with other messages) and include it in a block. - The miner publishes the newly-mined block in the
/fil/blocks/
pubsub topic and the block propagates to all nodes in the network (including the nodes that published the messages included in this block).
Nodes must check that incoming messages are valid, that is, that they have a valid signature. If the message is not valid it should be dropped and must not be forwarded.
The updated, hardened version of the GossipSub protocol includes a number of attack mitigation strategies. For instance, when a node receives an invalid message it assigns a negative score to the sender peer. Peer scores are not shared with other nodes, but are rather kept locally by every peer for all other peers it is interacting with. If a peer’s score drops below a threshold it is excluded from the scoring peer’s mesh. We discuss more details on these settings in the GossipSub section. The full details can be found in the GossipSub Specification.
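As a hedged sketch of how a node joins the message topic, the go-libp2p-pubsub API can be used as below; the topic name is assumed to be suffixed with the network name, and signature validation of each received message is left as a placeholder.

package example

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
)

// subscribeMsgs joins the /fil/msgs/<network> topic and hands every received
// pubsub message to handle. Invalid messages should be dropped by a validator
// (not shown) so that they are never forwarded to other peers.
func subscribeMsgs(ctx context.Context, h host.Host, network string, handle func([]byte)) error {
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		return err
	}
	topic, err := ps.Join("/fil/msgs/" + network)
	if err != nil {
		return err
	}
	sub, err := topic.Subscribe()
	if err != nil {
		return err
	}
	go func() {
		for {
			m, err := sub.Next(ctx)
			if err != nil {
				return // context cancelled or subscription closed
			}
			handle(m.Data)
		}
	}()
	return nil
}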
NOTES:
- Fund Checking: It is important to note that the
mpool
logic is not checking whether there are enough funds in the account of the message issuer. This is checked by the miner before including a message in a block. - Message Sorting: Messages are sorted in the
mpool
of miners as they arrive according to cryptoeconomic rules followed by the miner and in order for the miner to compose the next block.
Message Storage
- State: stable
- Theory Audit: n/a
As mentioned earlier, there is no central pool where messages are stored. Instead, every node must allocate memory for incoming messages.
ChainSync
- State: stable
- Theory Audit: n/a
Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and messages, and thus is in charge of distributed state replication. As such, this process is security critical – problems with state replication can have severe consequences for the operation of a blockchain.
When a node first joins the network it discovers peers (through the peer discovery mechanisms discussed above) and joins the /fil/blocks and /fil/msgs GossipSub topics. It listens to new blocks being propagated by other nodes. It picks one block as the BestTargetHead and starts syncing the blockchain up to this height from the TrustedCheckpoint, which by default is the GenesisBlock or GenesisCheckpoint. In order to pick the BestTargetHead the peer compares a combination of height and weight: the higher these values, the higher the chances of the block being on the main chain. If there are two blocks at the same height, the peer should choose the one with the higher weight. Once the peer chooses the BestTargetHead it uses the BlockSync protocol to fetch the blocks and get to the current height. From that point on it is in CHAIN_FOLLOW mode, where it uses GossipSub to receive new blocks, or Bitswap if it hears about a block that it has not received through GossipSub.
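A minimal sketch of this selection rule is shown below; tipSetHead is an assumed stand-in type carrying only the weight and height needed for the comparison, with chain weight as the primary criterion and height breaking ties between equally weighted candidates:

import "math/big"

// tipSetHead is a minimal stand-in for a chain head candidate, carrying only
// the fields needed for the comparison described above.
type tipSetHead struct {
	Height uint64
	Weight *big.Int
}

// pickBestTargetHead returns the candidate with the greatest chain weight,
// preferring the higher head when two candidates have equal weight.
// It assumes candidates is non-empty.
func pickBestTargetHead(candidates []tipSetHead) tipSetHead {
	best := candidates[0]
	for _, c := range candidates[1:] {
		switch c.Weight.Cmp(best.Weight) {
		case 1: // strictly heavier chain wins
			best = c
		case 0: // equal weight: prefer the higher head
			if c.Height > best.Height {
				best = c
			}
		}
	}
	return best
}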
ChainSync Overview
- State: stable
- Theory Audit: n/a
ChainSync is the protocol Filecoin uses to synchronize its blockchain. It is specific to Filecoin’s choices in state representation and consensus rules, but is general enough that it can serve other blockchains. ChainSync is a group of smaller protocols which handle different parts of the sync process.
Chain synchronisation is generally needed in the following cases:
- when a node first joins the network and needs to get to the current state before validating or extending the chain.
- when a node has fallen out of sync, e.g., due to a brief disconnection.
- during normal operation in order to keep up with the latest messages and blocks.
There are three main protocols used to achieve synchronisation for these three cases:
- GossipSub is the libp2p pubsub protocol used to propagate messages and blocks. It is mainly used in the third case above, when a node needs to stay in sync with new blocks being produced and propagated.
- BlockSync is used to synchronise specific parts of the chain, that is, from and to a specific height.
- The hello protocol is used when two peers first “meet” (i.e., the first time they connect to each other). According to the protocol, they exchange their chain heads.

In addition, Bitswap is used to request and receive blocks when a node is synchronised (“caught up”) but GossipSub has failed to deliver some blocks to it. Finally, GraphSync can be used to fetch parts of the blockchain as a more efficient version of Bitswap.
Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve the same results. That said, nodes MUST implement the version of ChainSync as described in this spec in order to be considered implementations of Filecoin.
Terms and Concepts
- State: stable
- Theory Audit: n/a
- LastCheckpoint: the last hard, social-consensus-oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
- TargetHeads: a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are “target” heads because ChainSync will try to sync to them. This list is sorted by “likelihood of being the best chain”. At this point this is simply realized through ChainWeight.
- BestTargetHead: the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads.
ChainSync State Machine
- State: stable
- Theory Audit: n/a
At a high level, ChainSync does the following:
- Part 1: Verify internal state (INIT state below)
  - SHOULD verify data structures and validate local chain
  - Resource-expensive verification MAY be skipped at nodes' own risk
- Part 2: Bootstrap to the network (BOOTSTRAP)
  - Step 1. Bootstrap to the network, and acquire a “secure enough” set of peers (more details below)
  - Step 2. Bootstrap to the GossipSub channels
- Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
  - Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint). The TrustedCheckpoint SHOULD NOT be verified in software; it SHOULD be verified by operators.
  - Step 2. Get the block it points to, and that block's parents
  - Step 3. Fetch the StateTree
- Part 4: Catch up to the chain (CHAIN_CATCHUP)
  - Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
  - Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
  - Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate.
  - Step 4. Finish when the node has “caught up” with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc.).
- Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
  - Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
  - Step 2. Receive, validate, and propagate received Blocks
  - Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state.

ChainSync uses the following conceptual state machine. Since this is a conceptual state machine, implementations MAY deviate from implementing precisely these states, or dividing them strictly. Implementations MAY blur the lines between the states. If so, implementations MUST ensure security of the altered protocol.
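A minimal sketch of how an implementation might represent these conceptual states and transitions is given below; the state names follow the list above, while the transition triggers are simplified assumptions:

// syncState enumerates the conceptual ChainSync states described above.
type syncState int

const (
	stateInit syncState = iota
	stateBootstrap
	stateSyncCheckpoint
	stateChainCatchup
	stateChainFollow
)

// syncEvent is a simplified trigger for state transitions.
type syncEvent int

const (
	eventLocalChainVerified syncEvent = iota
	eventPeersAcquired
	eventCheckpointStateFetched
	eventCaughtUpToBestHead
	eventSecurityConditionsChanged
)

// next returns the following state for a given event; unrecognized
// combinations leave the state unchanged.
func next(s syncState, e syncEvent) syncState {
	switch {
	case s == stateInit && e == eventLocalChainVerified:
		return stateBootstrap
	case s == stateBootstrap && e == eventPeersAcquired:
		return stateSyncCheckpoint
	case s == stateSyncCheckpoint && e == eventCheckpointStateFetched:
		return stateChainCatchup
	case s == stateChainCatchup && e == eventCaughtUpToBestHead:
		return stateChainFollow
	case s == stateChainFollow && e == eventSecurityConditionsChanged:
		return stateChainCatchup
	default:
		return s
	}
}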
Peer Discovery
- State: stable
- Theory Audit: n/a
Peer discovery is a critical part of the overall architecture. Getting it wrong can have severe consequences for the operation of the protocol. The set of peers a new node initially connects to when joining the network may completely dominate the node’s awareness of other peers, and therefore the view of the state of the network that the node has.
Peer discovery can be driven by arbitrary external means and is pushed outside the core functionality of the protocols involved in ChainSync (i.e., GossipSub, Bitswap, BlockSync). This allows for orthogonal, application-driven development and no external dependencies for the protocol implementation. Nonetheless, the GossipSub protocol supports: i) Peer Exchange, and ii) Explicit Peering Agreements.
Peer Exchange
- State: stable
- Theory Audit: n/a
Peer Exchange allows applications to bootstrap from a known set of peers without an external peer discovery mechanism. This process can be realized either through bootstrap nodes or through other normal peers. Bootstrap nodes must be maintained by system operators and must be configured correctly. They have to be stable and operate independently of protocol constructions such as the GossipSub mesh construction; that is, bootstrap nodes do not maintain connections to the mesh.
For more details on Peer Exchange please refer to the GossipSub specification.
Explicit Peering Agreements
- State: stable
- Theory Audit: n/a
With explicit peering agreements, the operators must specify a list of peers which nodes should connect to when joining. The protocol must have options available for these to be specified. For every explicit peer, the router must establish and maintain a bidirectional (reciprocal) connection.
Progressive Block Validation
- State: stable
- Theory Audit: n/a
- Blocks may be validated in progressive stages, in order to minimize resource expenditure.
- Validation computation is considerable, and a serious DoS attack vector.
- Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.
- ChainSync SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by FinalityTipset, or when ChainSync is under significant resource load.
- These stages can be used partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.
- Progressive Stages of Block Validation:
  - BV0 - Syntax: Serialization, typing, value ranges.
  - BV1 - Plausible Consensus: Plausible miner, weight, and epoch values (e.g. from chain state at b.ChainEpoch - consensus.LookbackParameter).
  - BV2 - Block Signature
  - BV3 - Beacon entries: Valid random beacon entries have been inserted in the block (see beacon entry validation).
  - BV4 - ElectionProof: A valid election proof was generated.
  - BV5 - WinningPoSt: Correct PoSt generated.
  - BV6 - Chain ancestry and finality: Verify the block links back to the trusted chain, not prior to finality.
  - BV7 - Message Signatures
  - BV8 - State tree: Parent tipset message execution produces the claimed state tree root and receipts.
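One way to organize the stages is as a short-circuiting pipeline, as in the sketch below; the stage functions are assumed placeholders for the BV0-BV8 checks listed above, and the cheap checks run first so bad blocks are pruned before expensive work is attempted:

import "fmt"

// blockHeader is a placeholder for the real block header type.
type blockHeader struct{}

// validationStage is one of the BV0..BV8 checks; it returns an error if the
// block fails that stage.
type validationStage struct {
	name  string
	check func(*blockHeader) error
}

// validateProgressively runs the stages in order and stops at the first
// failure, so cheap syntactic checks prune bad blocks before the expensive
// state-tree validation is ever attempted.
func validateProgressively(b *blockHeader, stages []validationStage) error {
	for _, s := range stages {
		if err := s.check(b); err != nil {
			return fmt.Errorf("block failed %s: %w", s.name, err)
		}
	}
	return nil
}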
Storage Power Consensus
- State: reliable
- Theory Audit: wip
The Storage Power Consensus (SPC) subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. Storage Power Consensus accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.
Succinctly, the SPC subsystem offers the following services:
- Access to the Power Table for every subchain, accounting for individual storage miner power and total power on-chain.
- Access to Expected Consensus for individual storage miners, enabling:
  - Access to verifiable randomness Tickets as provided by drand for the rest of the protocol.
  - Running Leader Election to produce new blocks.
  - Running Chain Selection across subchains using EC's weighting function.
  - Identification of the most recently finalized tipset, for use by all protocol participants.
Distinguishing between storage miners and block miners
- State: reliable
- Theory Audit: wip
There are two ways to earn Filecoin tokens in the Filecoin network:
- By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
- By mining new blocks, extending the blockchain, securing the Filecoin consensus mechanism, and running smart contracts to perform state updates as a Storage Miner.
There are two types of “miners” (storage and block miners) to be distinguished. Leader Election in Filecoin is predicated on a miner’s storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.
However, given that Filecoin’s “useful Proof-of-Work” is achieved through file storage (PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.
On Power
- State: reliable
- Theory Audit: wip
Quality-adjusted power is assigned to every sector as a static function of its Sector Quality which includes: i) the Sector Spacetime, which is the product of the sector size and the promised storage duration, ii) the Deal Weight that converts spacetime occupied by deals into consensus power, iii) the Deal Quality Multiplier that depends on the type of deal done over the sector (i.e., CC, Regular Deal or Verified Client Deal), and finally, iv) the Sector Quality Multiplier, which is an average of deal quality multipliers weighted by the amount of spacetime each type of deal occupies in the sector.
The Sector Quality is a measure that maps size, duration and the type of active deals in a sector during its lifetime to its impact on power and reward distribution.
The quality of a sector depends on the deals made over the data inside the sector. There are generally three types of deals: Committed Capacity (CC), where there is effectively no deal and the miner is storing arbitrary data inside the sector; Regular Deals, where a miner and a client agree on a price in the market; and Verified Client deals, which give more power to the sector. We refer the reader to the Sector and Sector Quality sections for details on Sector Types and Sector Quality, the Verified Clients section for more details on what a verified client is, and the CryptoEconomics section for specific parameter values on the Deal Weights and Quality Multipliers.
Quality-Adjusted Power is the number of votes a miner has in the Secret Leader Election and has been defined to increase linearly with the useful storage that a miner has committed to the network.
More precisely, we have the following definitions:
- Raw-byte power: the size of a sector in bytes.
- Quality-adjusted power: the consensus power of stored data on the network, equal to Raw-byte power multiplied by the Sector Quality Multiplier.
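As a rough illustration of these definitions (not the fixed-point arithmetic used by the built-in actors), the sketch below computes quality-adjusted power as raw-byte power times the spacetime-weighted average of the deal quality multipliers; the multiplier values are assumptions for the example, and the canonical parameters live in the CryptoEconomics section:

import "math/big"

// Illustrative deal-quality multipliers (assumed values for the example).
var (
	qualityBaseMultiplier        = big.NewRat(10, 10)  // committed capacity: 1x
	dealWeightMultiplier         = big.NewRat(10, 10)  // regular deals: 1x
	verifiedDealWeightMultiplier = big.NewRat(100, 10) // verified deals: 10x
)

// qaPower computes quality-adjusted power as raw sector size times the
// sector quality multiplier, i.e. the spacetime-weighted average of the
// deal quality multipliers. sectorSize is in bytes, duration in epochs;
// dealSpacetime and verifiedSpacetime are the byte-epoch totals occupied
// by regular and verified deals respectively.
func qaPower(sectorSize, duration, dealSpacetime, verifiedSpacetime int64) *big.Rat {
	sectorSpacetime := big.NewRat(sectorSize*duration, 1)
	ccSpacetime := new(big.Rat).Sub(sectorSpacetime,
		big.NewRat(dealSpacetime+verifiedSpacetime, 1))

	weighted := new(big.Rat).Mul(ccSpacetime, qualityBaseMultiplier)
	weighted.Add(weighted, new(big.Rat).Mul(big.NewRat(dealSpacetime, 1), dealWeightMultiplier))
	weighted.Add(weighted, new(big.Rat).Mul(big.NewRat(verifiedSpacetime, 1), verifiedDealWeightMultiplier))

	// Sector quality multiplier = weighted spacetime / total spacetime.
	quality := new(big.Rat).Quo(weighted, sectorSpacetime)

	// Quality-adjusted power = raw-byte power x sector quality multiplier.
	return new(big.Rat).Mul(big.NewRat(sectorSize, 1), quality)
}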
Beacon Entries
- State: reliable
- Theory Audit: wip
The Filecoin protocol uses randomness produced by a drand beacon as unbiasable randomness seeds for use in the chain (see randomness).
In turn these random seeds are used by:
- The sector_sealer as SealSeeds to bind sector commitments to a given subchain.
- The post_generator as PoStChallenges to prove sectors remain committed as of a given block.
- The Storage Power subsystem as randomness in Secret Leader Election to determine how often a miner is chosen to mine a new block.
This randomness may be drawn from various Filecoin chain epochs by the respective protocols that use them according to their security requirements.
It is important to note that a given Filecoin network and a given drand network need not have the same round time, i.e. blocks may be generated faster or slower by Filecoin than randomness is generated by drand. For instance, if the drand beacon is producing randomness twice as fast as Filecoin produces blocks, we might expect two random values to be produced in a Filecoin epoch; conversely, if the Filecoin network is twice as fast as drand, we might expect a random value every other Filecoin epoch. Accordingly, depending on both networks' configurations, certain Filecoin blocks could contain multiple or no drand entries.

Furthermore, any call to the drand network for a new randomness entry during an outage must block, as noted with the drand.Public() calls below.

In all cases, Filecoin blocks must include all drand beacon outputs generated since the last epoch in the BeaconEntries field of the block header. Any use of randomness from a given Filecoin epoch should use the last valid drand entry included in a Filecoin block. This is shown below.
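The mapping between Filecoin epochs and drand rounds is plain clock arithmetic over the two networks' genesis times and periods, roughly as sketched below (a simplified view of the MaxBeaconRoundForEpoch logic shown later in this section; parameter names are assumptions):

// maxBeaconRound returns the latest drand round whose scheduled time is not
// after the start of the given Filecoin epoch. All times are unix seconds;
// the parameters loosely mirror the DrandBeacon fields shown below.
func maxBeaconRound(filEpoch, filGenTime, filRoundTime, drandGenTime, drandPeriod uint64) uint64 {
	// Timestamp at which the requested Filecoin epoch begins.
	epochTime := filGenTime + filEpoch*filRoundTime
	if epochTime < drandGenTime {
		// Before the drand network's genesis there is no round to reference.
		return 0
	}
	// Rounds are numbered from 1 at drandGenTime and produced every drandPeriod.
	return (epochTime-drandGenTime)/drandPeriod + 1
}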
Get drand randomness for VM
- State: reliable
- Theory Audit: wip
For operations such as PoRep creation, proof validations, or anything that requires randomness for the Filecoin VM, there should be a method that extracts the drand entry from the chain correctly. Note that the requested drand round may span multiple Filecoin epochs if drand is slower; the block with the lowest epoch number will contain the requested beacon entry. Similarly, if there have been null rounds where the beacon should have been inserted, we need to iterate on the chain to find where the entry is inserted. Specifically, the next non-null block must, by definition, contain the requested drand entry.
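A sketch of that lookup under assumed, minimal chain-store and tipset interfaces follows: walk back towards genesis from the given tipset until a block containing beacon entries is found, and return the last entry of that block:

import (
	"context"
	"errors"
)

// beaconEntry and tipSet are assumed minimal types for the example;
// the real types live in the Lotus chain packages.
type beaconEntry struct {
	Round uint64
	Data  []byte
}

type tipSet interface {
	BeaconEntries() []beaconEntry // beacon entries carried by the tipset's blocks
	Height() int64
	Parent(ctx context.Context) (tipSet, error)
}

// latestBeaconEntry walks back towards genesis from ts and returns the last
// beacon entry of the first tipset that contains any. This handles both a
// drand network slower than Filecoin and null rounds, as described above.
func latestBeaconEntry(ctx context.Context, ts tipSet) (*beaconEntry, error) {
	cur := ts
	for {
		if entries := cur.BeaconEntries(); len(entries) > 0 {
			e := entries[len(entries)-1]
			return &e, nil
		}
		if cur.Height() == 0 {
			return nil, errors.New("reached genesis without finding a beacon entry")
		}
		parent, err := cur.Parent(ctx)
		if err != nil {
			return nil, err
		}
		cur = parent
	}
}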
Fetch randomness from drand network
- State: reliable
- Theory Audit: wip
When mining, a miner can fetch entries from the drand network to include them in the new block.
type DrandBeacon struct {
isChained bool
client dclient.Client
pubkey kyber.Point
// seconds
interval time.Duration
drandGenTime uint64
filGenTime uint64
filRoundTime uint64
scheme *dcrypto.Scheme
localCache *lru.Cache[uint64, *types.BeaconEntry]
}
func BeaconEntriesForBlock(ctx context.Context, bSchedule Schedule, nv network.Version, epoch abi.ChainEpoch, parentEpoch abi.ChainEpoch, prev types.BeaconEntry) ([]types.BeaconEntry, error) {
// When we have "chained" beacons, two entries at a fork are required.
parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
currBeacon := bSchedule.BeaconForEpoch(epoch)
if parentBeacon != currBeacon && currBeacon.IsChained() {
// Fork logic
round := currBeacon.MaxBeaconRoundForEpoch(nv, epoch)
out := make([]types.BeaconEntry, 2)
rch := currBeacon.Entry(ctx, round-1)
res := <-rch
if res.Err != nil {
return nil, xerrors.Errorf("getting entry %d returned error: %w", round-1, res.Err)
}
out[0] = res.Entry
rch = currBeacon.Entry(ctx, round)
res = <-rch
if res.Err != nil {
return nil, xerrors.Errorf("getting entry %d returned error: %w", round, res.Err)
}
out[1] = res.Entry
return out, nil
}
start := build.Clock.Now()
maxRound := currBeacon.MaxBeaconRoundForEpoch(nv, epoch)
// We don't expect this to ever be the case
if maxRound == prev.Round {
return nil, nil
}
// TODO: this is a sketchy way to handle the genesis block not having a beacon entry
if prev.Round == 0 {
prev.Round = maxRound - 1
}
var out []types.BeaconEntry
for currEpoch := epoch; currEpoch > parentEpoch; currEpoch-- {
currRound := currBeacon.MaxBeaconRoundForEpoch(nv, currEpoch)
rch := currBeacon.Entry(ctx, currRound)
select {
case resp := <-rch:
if resp.Err != nil {
return nil, xerrors.Errorf("beacon entry request returned error: %w", resp.Err)
}
out = append(out, resp.Entry)
case <-ctx.Done():
return nil, xerrors.Errorf("context timed out waiting on beacon entry to come back for epoch %d: %w", epoch, ctx.Err())
}
}
log.Debugw("fetching beacon entries", "took", build.Clock.Since(start), "numEntries", len(out))
reverse(out)
return out, nil
}
func (db *DrandBeacon) MaxBeaconRoundForEpoch(nv network.Version, filEpoch abi.ChainEpoch) uint64 {
// TODO: sometimes the genesis time for filecoin is zero and this goes negative
latestTs := ((uint64(filEpoch) * db.filRoundTime) + db.filGenTime) - db.filRoundTime
if nv <= network.Version15 {
return db.maxBeaconRoundV1(latestTs)
}
return db.maxBeaconRoundV2(latestTs)
}
Validating Beacon Entries on block reception
- State: reliable
- Theory Audit: wip
A Filecoin chain will contain the entirety of the beacon’s output from the Filecoin genesis to the current block.
Given their role in leader election and other critical protocols in Filecoin, a block’s beacon entries must be validated for every block. See drand for details. This can be done by ensuring every beacon entry is a valid signature over the prior one in the chain, using drand’s Verify endpoint as follows:
func ValidateBlockValues(bSchedule Schedule, nv network.Version, h *types.BlockHeader, parentEpoch abi.ChainEpoch,
prevEntry types.BeaconEntry) error {
parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
currBeacon := bSchedule.BeaconForEpoch(h.Height)
// When we have "chained" beacons, two entries at a fork are required.
if parentBeacon != currBeacon && currBeacon.IsChained() {
if len(h.BeaconEntries) != 2 {
return xerrors.Errorf("expected two beacon entries at beacon fork, got %d", len(h.BeaconEntries))
}
err := currBeacon.VerifyEntry(h.BeaconEntries[1], h.BeaconEntries[0].Data)
if err != nil {
return xerrors.Errorf("beacon at fork point invalid: (%v, %v): %w",
h.BeaconEntries[1], h.BeaconEntries[0], err)
}
return nil
}
maxRound := currBeacon.MaxBeaconRoundForEpoch(nv, h.Height)
// We don't expect to ever actually meet this condition
if maxRound == prevEntry.Round {
if len(h.BeaconEntries) != 0 {
return xerrors.Errorf("expected not to have any beacon entries in this block, got %d", len(h.BeaconEntries))
}
return nil
}
if len(h.BeaconEntries) == 0 {
return xerrors.Errorf("expected to have beacon entries in this block, but didn't find any")
}
// We skip verifying the genesis entry when randomness is "chained".
if currBeacon.IsChained() && prevEntry.Round == 0 {
return nil
}
last := h.BeaconEntries[len(h.BeaconEntries)-1]
if last.Round != maxRound {
return xerrors.Errorf("expected final beacon entry in block to be at round %d, got %d", maxRound, last.Round)
}
// If the beacon is UNchained, verify that the block only includes the rounds we want for the epochs in between parentEpoch and h.Height
// For chained beacons, you must have all the rounds forming a valid chain with prevEntry, so we can skip this step
if !currBeacon.IsChained() {
// Verify that all other entries' rounds are as expected for the epochs in between parentEpoch and h.Height
for i, e := range h.BeaconEntries {
correctRound := currBeacon.MaxBeaconRoundForEpoch(nv, parentEpoch+abi.ChainEpoch(i)+1)
if e.Round != correctRound {
return xerrors.Errorf("unexpected beacon round %d, expected %d for epoch %d", e.Round, correctRound, parentEpoch+abi.ChainEpoch(i))
}
}
}
// Verify the beacon entries themselves
for i, e := range h.BeaconEntries {
if err := currBeacon.VerifyEntry(e, prevEntry.Data); err != nil {
return xerrors.Errorf("beacon entry %d (%d - %x (%d)) was invalid: %w", i, e.Round, e.Data, len(e.Data), err)
}
prevEntry = e
}
return nil
}
Tickets
- State: reliable
- Theory Audit: wip
Filecoin block headers also contain a single “ticket”, generated from the beacon entry of the block’s epoch. Tickets are used to break ties in the Fork Choice Rule, for forks of equal weight.
Whenever comparing tickets in Filecoin, the comparison is that of the ticket’s VRF Digest’s bytes.
Randomness Ticket generation
- State: reliable
- Theory Audit: wip
At a Filecoin epoch n, a new ticket is generated using the appropriate beacon entry for epoch n.
The miner runs the beacon entry through a Verifiable Random Function (VRF) to get a new unique ticket. The beacon entry is prepended with the ticket domain separation tag and concatenated with the miner actor address (to ensure miners using the same worker keys get different tickets).
To generate a ticket for a given epoch n:
randSeed = GetRandomnessFromBeacon(n)
newTicketRandomness = VRF_miner(H(TicketProdDST || index || Serialization(randSeed, minerActorAddress)))
Verifiable Random Functions are used for ticket generation.
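A hedged sketch of this computation is shown below; the hash construction, the domain separation tag value and the vrfSign helper are stand-ins for the real crypto library calls rather than the canonical implementation:

import (
	"crypto/sha256"
	"encoding/binary"
)

const ticketProductionDST = int64(1) // assumed domain separation tag value

// drawRandomness mirrors the H(DST || index || serialization) construction in
// the formula above: it hashes the domain tag, the epoch and the entropy
// (here: the miner actor address bytes) together with the beacon randomness.
func drawRandomness(beacon []byte, dst int64, epoch int64, entropy []byte) []byte {
	h := sha256.New()
	binary.Write(h, binary.BigEndian, dst)
	h.Write(beacon)
	binary.Write(h, binary.BigEndian, epoch)
	h.Write(entropy)
	return h.Sum(nil)
}

// generateTicket produces the new ticket randomness by running the seed
// through the miner's VRF. vrfSign stands in for the worker key's VRF.
func generateTicket(vrfSign func(input []byte) []byte,
	beacon []byte, epoch int64, minerAddr []byte) []byte {
	seed := drawRandomness(beacon, ticketProductionDST, epoch, minerAddr)
	return vrfSign(seed)
}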
Ticket Validation
- State: reliable
- Theory Audit: wip
Each Ticket should be generated from the prior one in the VRF-chain and verified accordingly.
Minimum Miner Size
- State: reliable
- Theory Audit: wip
In order to secure Storage Power Consensus, the system defines a minimum miner size required to participate in consensus.
Specifically, miners must have at least MIN_MINER_SIZE_STOR of power (i.e. storage power currently used in storage deals) in order to participate in leader election. If no miner has MIN_MINER_SIZE_STOR or more power, miners with at least as much power as the smallest miner in the top MIN_MINER_SIZE_TARG of miners (sorted by storage power) will be able to participate in leader election. In plain English, take MIN_MINER_SIZE_TARG = 3 for instance: this means that miners with at least as much power as the 3rd largest miner will be eligible to participate in consensus.
Miners smaller than this cannot mine blocks and earn block rewards in the network. Their power will still be counted in the total network (raw or claimed) storage power, even though their power will not be counted as votes for leader election. However, it is important to note that such miners can still have their power faulted and be penalized accordingly.
Accordingly, the genesis block must include miners (potentially with just CommittedCapacity sectors) to bootstrap the network.
The MIN_MINER_SIZE_TARG condition will not be used in a network in which any miner has more than MIN_MINER_SIZE_STOR power. It is nonetheless defined to ensure liveness in small networks (e.g. close to genesis or after large power drops).
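A simplified sketch of the eligibility rule described above, with the two thresholds passed in as parameters; the real check lives in the storage power actor's consensus-minimum logic:

import "sort"

// eligibleForElection reports whether a miner with the given quality-adjusted
// power may participate in leader election, per the rule above: either it
// meets the absolute minimum, or (when no miner does) it is at least as large
// as the smallest of the top minMinerSizeTarg miners.
func eligibleForElection(minerPower uint64, allPowers []uint64,
	minMinerSizeStor uint64, minMinerSizeTarg int) bool {

	if minerPower >= minMinerSizeStor {
		return true
	}
	// If any miner meets the absolute threshold, it is in force and this miner is too small.
	for _, p := range allPowers {
		if p >= minMinerSizeStor {
			return false
		}
	}
	// Fallback for small networks: compare against the smallest of the
	// top minMinerSizeTarg miners by power.
	sorted := append([]uint64(nil), allPowers...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
	if len(sorted) == 0 {
		return false
	}
	idx := minMinerSizeTarg - 1
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return minerPower >= sorted[idx]
}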
Storage Power Actor
- State: reliable
- Theory Audit: wip
StoragePowerActorState implementation
- State: reliable
- Theory Audit: wip
type State struct {
TotalRawBytePower abi.StoragePower
// TotalBytesCommitted includes claims from miners below min power threshold
TotalBytesCommitted abi.StoragePower
TotalQualityAdjPower abi.StoragePower
// TotalQABytesCommitted includes claims from miners below min power threshold
TotalQABytesCommitted abi.StoragePower
TotalPledgeCollateral abi.TokenAmount
// These fields are set once per epoch in the previous cron tick and used
// for consistent values across a single epoch's state transition.
ThisEpochRawBytePower abi.StoragePower
ThisEpochQualityAdjPower abi.StoragePower
ThisEpochPledgeCollateral abi.TokenAmount
ThisEpochQAPowerSmoothed smoothing.FilterEstimate
MinerCount int64
// Number of miners having proven the minimum consensus power.
MinerAboveMinPowerCount int64
// A queue of events to be triggered by cron, indexed by epoch.
CronEventQueue cid.Cid // Multimap, (HAMT[ChainEpoch]AMT[CronEvent])
// First epoch in which a cron task may be stored.
// Cron will iterate every epoch between this and the current epoch inclusively to find tasks to execute.
FirstCronEpoch abi.ChainEpoch
// Claimed power for each miner.
Claims cid.Cid // Map, HAMT[address]Claim
ProofValidationBatch *cid.Cid // Multimap, (HAMT[Address]AMT[SealVerifyInfo])
}
StoragePowerActor implementation
- State: reliable
- Theory Audit: wip
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.CreateMiner,
3: a.UpdateClaimedPower,
4: a.EnrollCronEvent,
5: a.CronTick,
6: a.UpdatePledgeTotal,
7: nil, // deprecated
8: a.SubmitPoRepForBulkVerify,
9: a.CurrentTotalPower,
}
}
func (a Actor) Constructor(rt Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
st, err := ConstructState(adt.AsStore(rt))
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to construct state")
rt.StateCreate(st)
return nil
}
The Power Table
- State: reliable
- Theory Audit: wip
The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Quality-Adjusted Power Fraction over time. That is, a miner whose quality-adjusted power represents 1% of total quality-adjusted power on the network should mine 1% of blocks in expectation.
SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), for failed PoSts (decrementing miner power) or for other storage and consensus faults.
Sector ProveCommit is the first time power is proven to the network and hence power is first added upon successful sector ProveCommit. Power is also added when a sector is declared as recovered. Miners are expected to prove over all their sectors that contribute to their power.
Power is decremented when a sector expires, when a sector is declared or detected to be faulty, or when it is terminated through miner invocation. Miners can also extend the lifetime of a sector through ExtendSectorExpiration.
The Miner lifecycle in the power table should be roughly as follows:
- MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
- UpdatePower: These power increments and decrements are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
  - Power is incremented at ProveCommit, as a subcall of miner.ProveCommitSector or miner.ProveCommitAggregate.
  - Power of a partition is decremented immediately after a missed WindowPoSt (DetectedFault).
  - A particular sector's power is decremented when it enters into a faulty state, either through Declared Faults or Skipped Faults.
  - A particular sector's power is added back after recovery is declared and proven by PoSt.
  - A particular sector's power is removed when the sector is expired or terminated through miner invocation.
To summarize, only sectors in the Active state will command power. A sector becomes Active when it is added upon ProveCommit. Power is immediately decremented when it enters the faulty state. Power will be restored when its declared recovery is proven. A sector's power is removed when it is expired or terminated through miner invocation.
Pledge Collateral
- State: reliable
- Theory Audit: wip
Pledge Collateral is slashed for any fault affecting storage-power consensus. These include:
- faults to expected consensus in particular (see Consensus Faults), which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
- faults affecting consensus power more generally, specifically uncommitted power faults (i.e. Storage Faults), which will be reported by the CronActor automatically, or when a miner terminates a sector earlier than its promised duration.
For a more detailed discussion on Pledge Collateral, please see the Miner Collaterals section.
Token
- State: reliable
- Theory Audit: n/a
Minting Model
- State: reliable
- Theory Audit: n/a
Many blockchains mint tokens based on a simple exponential decay model. Under this model, block rewards are highest in the beginning, and miner participation is often the lowest, so mining generates many tokens per unit of work early in the networkʼs life, then rapidly decreases.
Over many cryptoeconomic simulations, it became clear that the simple exponential decay model would encourage short-term behavior around network launch with an unhealthy effect on the Filecoin Economy. Specifically, it would incentivize storage miners to over-invest in hardware for the sealing stage of mining to onboard storage as quickly as possible. It would be profitable to exit the network after exhausting these early rewards, even if it resulted in losing client data. This would harm the network: clients would lose data and have less access to long-term storage, and miners would have little incentive to contribute more resources to the network. Additionally, this would result in the majority of network subsidies being paid based wholly on timing, rather than actual storage (and hence value) provided to the network.
To encourage consistent storage onboarding and investment in long-term storage, not just rapid sealing, Filecoin introduces the concept of a network baseline. Instead of minting tokens based purely on elapsed time, block rewards instead scale up as total storage power on the network increases. This preserves the shape of the original exponential decay model, but softens it in the earliest days of the network. Once the network reaches the baseline, the cumulative block reward issued is identical to a simple exponential decay model, but if the network does not pass the pre-established threshold, a portion of block rewards are deferred. The overall result is that Filecoin rewards to miners more closely match the utility they, and the network as a whole, provide to clients.
Specifically, a hybrid exponential minting mechanism is introduced with a proportion of the reward coming from simple exponential decay, “Simple Minting” and the other proportion from network baseline, “Baseline Minting”. The total reward per epoch will be the sum of the two rewards. Mining Filecoin should be even more profitable with this mechanism. Simple minting allocation disproportionately rewards early miners and provides counter pressure to shocks. Baseline minting allocation mints more tokens when more value for the network has been created. More tokens are minted to facilitate greater trade when the network can unlock a greater potential. This should lead to increased creation of value for the network and lower risk of minting filecoin too quickly.
The protocol allocates 30% of Storage Mining Allocation in Simple Minting and the remaining 70% in Baseline Minting. 30% of Simple Minting can provide counter forces in the event of shocks. Baseline capacity can start from a smaller percentage of the worldʼs storage today, grow at a rapid rate, and catch up to a higher but still reasonable percentage of the worldʼs storage in the future. The network baseline will start from 2.5EiB (approximately 2.88EB, which is less than 0.01% of the worldʼs storage today) and grow at an annual rate of 100% (higher than the usual world storage annual growth rate of 40%). The community can come together to slow down the rate of growth when the network is providing 1-10% of the worldʼs storage.
There are many features that will make passing the baseline more efficient and economical and unleash a greater share of baseline minting. The community can come together to collectively achieve these goals:
- More performant Proof of Replication algorithms, with lower on chain footprint, faster verification time, cheaper hardware requirement, different security assumptions, resulting in sectors with longer lifetime and enabling sector upgrades without reseal.
- A more scalable consensus algorithm that can provide greater throughput and handle larger volume with shorter finality.
- More deal functionalities that allow sectors to last for longer.
Lastly, it is important to note that while the block reward incentivizes participation, it cannot be treated as a resource to be exploited. It is a common pool of subsidies that seeds and grows the network to benefit the economy and participants. An example of different stages of the economy and different sources of subsidies is illustrated in the following Figure.
Block Reward Minting
- State: reliable
- Theory Audit: n/a
In this section, we provide the mathematical specification for Simple Minting, Baseline Minting and Block Reward Issuance, along with the key mathematical properties of each.
Economic parameters
- State: reliable
- Theory Audit: n/a
- $M_\infty$ is the total asymptotic number of tokens to be emitted as storage-mining block rewards. Per the Token Allocation spec, $M_\infty := 55\% \cdot \texttt{FIL\_BASE} = 0.55 \cdot 2\times 10^9 FIL = 1.1 \times 10^9 FIL$. The dimension of the $M_\infty$ quantity is tokens.
- $\lambda$ is the “simple exponential decay” minting rate corresponding to a 6-year half-life. The meaning of “simple exponential decay” is that the total minted supply at time $t$ is $M_\infty \cdot (1 - e^{-\lambda t})$, so the specification of $\lambda$ in symbols becomes the equation $1 - e^{-\lambda \cdot 6yr} = \frac{1}{2}$. Note that a “year” is poorly defined. The simplified definition of $1yr := 365d$ was agreed upon for Filecoin. Of course, $1d = 86400s$, so $1yr = 31536000s$. We can solve this equation as

$$\lambda = \frac{\ln 2}{6yr} = \frac{\ln 2}{189216000s} \approx 3.66 \times 10^{-9} s^{-1}$$

The dimension of the $\lambda$ quantity is time$^{-1}$.

- $\gamma$ is the mixture between baseline and simple minting. A $\gamma$ value of 1.0 corresponds to pure baseline minting, while a $\gamma$ value of 0.0 corresponds to pure simple minting. We currently use $\gamma := 0.7$. The $\gamma$ quantity is dimensionless.
- $b(t)$ is the baseline function, which was designed as an exponential

$$b(t) = b_0 \cdot e^{g \cdot t}$$

where:
  - $b_0$ is the “initial baseline”. The dimension of the $b_0$ quantity is information.
  - $g$ is related to the baseline's “annual growth rate” ($g_a$) by the equation $\exp(g \cdot 1yr) = 1 + g_a$, which has the solution

$$g = \frac{\ln(1 + g_a)}{1yr}$$

While $g_a$ is dimensionless, the dimension of the $g$ quantity is time$^{-1}$.

The dimension of the $b(t)$ quantity is information.
Simple Minting
- State: reliable
- Theory Audit: n/a
- $M_{\infty B}$ is the total number of tokens to be emitted via baseline minting: $M_{\infty B} = M_\infty \cdot \gamma$. Correspondingly, $M_{\infty S}$ is the total asymptotic number of tokens to be emitted via simple minting: $M_{\infty S} = M_\infty \cdot (1 - \gamma)$. Of course, $M_{\infty B} + M_{\infty S} = M_\infty$.
- $M_S(t)$ is the total number of tokens that should ideally have been emitted by simple minting up until time $t$. It is defined as $M_S(t) = M_{\infty S} \cdot (1 - e^{-\lambda t})$. It is easy to verify that $\lim_{t\rightarrow\infty} M_S(t) = M_{\infty S}$.

Note that $M_S(t)$ is easy to calculate, and can be determined quite independently of the network's state. (This justifies the name “simple minting”.)
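As a quick numerical sanity check of these formulas (an illustration only, since the actors use fixed-point big-integer arithmetic rather than floating point), the snippet below evaluates $M_S(t)$ at the 6-year half-life and confirms it equals half of $M_{\infty S}$:

import (
	"fmt"
	"math"
)

func main() {
	const (
		filBase  = 2e9            // FIL_BASE
		mInf     = 0.55 * filBase // M_infinity: storage-mining allocation
		gamma    = 0.7            // baseline minting share
		yearSecs = 365 * 86400.0  // 1yr := 365d
		sixYears = 6 * yearSecs
	)
	mInfS := mInf * (1 - gamma)   // simple-minting allocation
	lambda := math.Ln2 / sixYears // decay rate for a 6-year half-life

	mS := func(t float64) float64 { return mInfS * (1 - math.Exp(-lambda*t)) }

	fmt.Printf("M_S(6yr) = %.0f FIL (half of M_infS = %.0f FIL)\n",
		mS(sixYears), mInfS/2)
}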
Baseline Minting
- State: reliable
- Theory Audit: n/a
To define $M_B(t)$ (which is the number of tokens that should be emitted up until time $t$ by baseline minting), we must introduce a number of auxiliary variables, some of which depend on network state.

- $R(t)$ is the instantaneous network raw-byte power (the total amount of bytes among all active sectors) at time $t$. This quantity is state-dependent: it depends on the activities of miners on the network (specifically: commitment, expiration, faulting, and termination of sectors). The dimension of the $R(t)$ quantity is information.
- $\overline{R}(t)$ is the capped network raw-byte power, defined as $\overline{R}(t):= \min\{b(t), R(t)\}$. Its dimension is also information.
- $\overline{R}_\Sigma(t)$ is the cumulative capped raw-byte power, defined as $\overline{R}_\Sigma(t) := \int_0^t \overline{R}(x)\, \mathrm{d}x$. The dimension of $\overline{R}_\Sigma(t)$ is information$\cdot$time (a dimension often referred to as “spacetime”).
- $\theta(t)$ is the “effective network time”, and is defined as the solution to the equation

$$\int_0^{\theta(t)} b(x)\, \mathrm{d}x = \overline{R}_\Sigma(t)$$

By plugging in the definition of $b(x)$ and evaluating the integral, we can solve for a closed form of $\theta(t)$ as follows:

$$\theta(t) = \frac{1}{g} \ln\left(\frac{g \cdot \overline{R}_\Sigma(t)}{b_0} + 1\right)$$

$M_B(t)$ is defined similarly to $M_S(t)$, just with $\theta(t)$ in place of $t$ and $M_{\infty B}$ in place of $M_{\infty S}$:

$$M_B(t) = M_{\infty B} \cdot \left(1 - e^{-\lambda \theta(t)}\right)$$
Block Reward Issuance
- State: reliable
- Theory Audit: n/a
$M(t)$, the total number of tokens to be emitted as expected block rewards up until time $t$, is defined as the sum of simple and baseline minting:

$$M(t) = M_S(t) + M_B(t)$$

Now we have defined a continuous target trajectory for cumulative minting. But minting actually occurs incrementally, and also in discrete increments. Periodically, a “tipset” is formed consisting of multiple winners, each of which receives an equal, finite amount of reward. A single miner may win multiple times, but only submits one block; it still receives rewards as if it had submitted multiple winning blocks. The mechanism by which multiple wins are rewarded is multiplication by a variable called WinCount, so we refer to the finite quantity minted and awarded for each win as “reward per WinCount” or “per win reward”.

- $\tau$ is the duration of an “epoch” or “round” (these are synonymous). Per the spec, $\tau = 30s$. The dimension of $\tau$ is time.
- $E$ is a parameter which determines the expected number of wins per round. While $E$ could be considered dimensionless, it is useful to give it a dimension of “wins”. In Filecoin, the value of $E$ is 5.
- $W(n)$ is the total number of wins by all miners in the tipset during round $n$. This also has dimension “wins”. For each $n$, $W(n)$ is a random variable with the independent identical distribution $\mathrm{Poisson}(E)$.
- $w(n)$ is the “reward per WinCount” or “per win reward” for round $n$. It is defined so that the expected cumulative issuance tracks $M(t)$:

$$w(n) = \frac{M(n\tau) - m\left((n-1)\tau\right)}{E}$$

The dimension of the $w(n)$ quantity is tokens$\cdot$wins$^{-1}$.

- While $M(t)$ is a continuous target for minted supply, the discrete and random amount of tokens which have been minted as of time $t$ is

$$m(t) = \sum_{k=1}^{\lfloor t/\tau \rfloor} W(k)\, w(k)$$

$m(t)$ depends on past values of both $W(n)$ and $R(n\tau)$.
Token Allocation
- State: reliable
- Theory Audit: n/a
Filecoinʼs token distribution is broken down as follows. A maximum of 2,000,000,000 FIL will ever be created, referred to as FIL_BASE
. Of the Filecoin genesis block allocation, 10% of FIL_BASE
were allocated for fundraising, of which 7.5% were sold in the 2017 token sale, and the 2.5% remaining were allocated for ecosystem development and potential future fundraising. 15% of FIL_BASE
were allocated to Protocol Labs (including 4.5% for the PL team & contributors), and 5% were allocated to the Filecoin Foundation. The other 70% of all tokens were allocated to miners, as mining rewards, “for providing data storage service, maintaining the blockchain, distributing data, running contracts, and more.” There are multiple types of mining that these rewards will support over time; therefore, this allocation has been subdivided to cover different mining activities. A pie chart reflecting the FIL token allocation is shown in the following Figure.
Storage Mining Allocation. At network launch, the only mining group with allocated incentives will be storage miners. This is the earliest group of miners, and the one responsible for maintaining the core functionality of the protocol. Therefore, this group has been allocated the largest amount of mining rewards. 55% of FIL_BASE
(78.6% of mining rewards) is allocated to storage mining. This will cover primarily block rewards, which reward maintaining the blockchain, running actor code, and subsidizing reliable and useful storage. This amount will also cover early storage mining rewards, such as rewards in the SpaceRace competition and other potential types of storage miner initialization, such as faucets.
Mining Reserve. The Filecoin ecosystem must ensure incentives exist for all types of miners (e.g. retrieval miners, repair miners, and including future unknown types of miners) to support a robust economy. In order to ensure the network can provide incentives for these other types of miners, 15% of FIL_BASE
(21.4% of mining rewards) have been set aside as a Mining Reserve. It will be up to the community to determine in the future how to distribute those tokens, through Filecoin improvement proposals (FIPs) or similar decentralized decision making processes. For example, the community might decide to create rewards for retrieval mining or other types of mining-related activities. The Filecoin Network, like all blockchain networks and open source projects, will continue to evolve, adapt, and overcome challenges for many years. Reserving these tokens provides future flexibility for miners and the ecosystem as a whole. Other types of mining, like retrieval mining, are not yet subsidized and yet are very important to the Filecoin Economy; Arguably, those uses may need a larger percentage of mining rewards. As years pass and the network evolves, it will be up to the community to decide whether this reserve is enough, or whether to make adjustments with unmined tokens.
Market Cap. Various communities estimate the size of cryptocurrency and token networks using different analogous measures of market capitalization. The most sensible token supply for such calculations is FIL_CirculatingSupply
, because unmined, unvested, locked, and burnt funds are not circulating or tradeable in the economy. Any calculations using larger measures such as FIL_BASE
are likely to be erroneously inflated and not to be believed.
Total Burnt Funds. Some filecoin are burned to fund on-chain computations and bandwidth as network message fees, in addition to those burned in penalties for storage faults and consensus faults, creating long-term deflationary pressure on the token. Accompanying the network message fees is the priority fee that is not burned, but goes to the block-producing miners for including a message.
Parameter | Value | Description |
---|---|---|
FIL_BASE | 2,000,000,000 FIL | The maximum amount of FIL that will ever be created. |
FIL_MiningReserveAlloc | 300,000,000 FIL | Tokens reserved for funding mining to support growth of the Filecoin Economy, whose future usage will be decided by the Filecoin community. |
FIL_StorageMiningAlloc | 1,100,000,000 FIL | The amount of FIL allocated to storage miners through block rewards and network initialization. |
FIL_Vested | Sum of genesis MultisigActors.AmountUnlocked | Total amount of FIL that is vested from the genesis allocation. |
FIL_StorageMined | RewardActor.TotalStoragePowerReward | The amount of FIL that has been mined by storage miners. |
FIL_Locked | TotalPledgeCollateral + TotalProviderDealCollateral + TotalClientDealCollateral + TotalPendingDealPayment + OtherLockedFunds | The amount of FIL locked as part of mining, deals, and other mechanisms. |
FIL_CirculatingSupply | FIL_Vested + FIL_Mined - TotalBurntFunds - FIL_Locked | The amount of FIL circulating and tradeable in the economy. The basis for Market Cap calculations. |
TotalBurntFunds | BurntFundsActor.Balance | Total FIL burned as part of penalties and on-chain computations. |
TotalPledgeCollateral | StoragePowerActor.TotalPledgeCollateral | Total FIL locked as pledge collateral in all miners. |
TotalProviderDealCollateral | StorageMarketActor.TotalProviderDealCollateral | Total FIL locked as provider deal collateral. |
TotalClientDealCollateral | StorageMarketActor.TotalClientDealCollateral | Total FIL locked as client deal collateral. |
TotalPendingDealPayment | StorageMarketActor.TotalPendingDealPayment | Total FIL locked as pending client deal payment. |
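The circulating-supply definition from the table can be expressed directly, as in the minimal sketch below, which assumes the component amounts have already been read from the corresponding actors' state:

import "math/big"

// circulatingSupply applies the table's definition:
// FIL_CirculatingSupply = FIL_Vested + FIL_Mined - TotalBurntFunds - FIL_Locked.
// All amounts are token amounts in attoFIL, as they are on chain.
func circulatingSupply(vested, mined, burnt, locked *big.Int) *big.Int {
	out := new(big.Int).Add(vested, mined)
	out.Sub(out, burnt)
	out.Sub(out, locked)
	return out
}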
Payment Channels
- State: stable
- Theory Audit: wip
Payment channels are generally used as a mechanism to increase the scalability of blockchains and enable users to transact without involving (i.e., publishing their transactions on) the blockchain, which: i) increases the load of the system, and ii) incurs gas costs for the user. Payment channels generally use a smart contract as an agreement between the two participants. In the Filecoin blockchain Payment Channels are realised by the paychActor.
The goal of the Payment Channel Actor specified here is to enable a series of off-chain microtransactions for applications built on top of Filecoin to be reconciled on-chain at a later time with fewer messages that involve the blockchain. Payment channels are already used in the Retrieval Market of the Filecoin Network, but their applicability is not constrained within this use-case only. Hence, here, we provide a detailed description of Payment Channels in the Filecoin network and then describe how Payment Channels are used in the specific case of the Filecoin Retrieval Market.
The payment channel actor can be used to open long-lived, flexible payment channels between users. Filecoin payment channels are uni-directional and can be funded by adding to their balance. Given the context of uni-directional payment channels, we define the payment channel sender as the party that receives some service, creates the channel, deposits funds and sends payments (hence the term payment channel sender). The payment channel recipient, on the other hand is defined as the party that provides services and receives payment for the services delivered (hence the term payment channel recipient). The fact that payment channels are uni-directional means that only the payment channel sender can add funds and the recipient can receive funds. Payment channels are identified by a unique address, as is the case with all Filecoin actors.
The payment channel state structure looks like this:
// A given payment channel actor is established by From (the recipient of a service)
// to enable off-chain microtransactions to To (the provider of a service) to be reconciled
// and tallied on chain.
type State struct {
// Channel owner, who has created and funded the actor - the channel sender
From addr.Address
// Recipient of payouts from channel
To addr.Address
// Amount successfully redeemed through the payment channel, paid out on `Collect()`
ToSend abi.TokenAmount
// Height at which the channel can be `Collected`
SettlingAt abi.ChainEpoch
// Height before which the channel `ToSend` cannot be collected
MinSettleHeight abi.ChainEpoch
// Collections of lane states for the channel, maintained in ID order.
LaneStates []*LaneState
}
Before continuing with the details of the Payment Channel and its components and features, it is worth defining a few terms.
- Voucher: a signed message created by either of the two channel parties that updates the channel balance. To differentiate from the payment channel sender/recipient, we refer to the voucher parties as voucher sender/recipient, who might or might not be the same as the payment channel ones (i.e., the voucher sender might be either the payment channel recipient or the payment channel sender).
- Redeeming a voucher: the voucher MUST be submitted on-chain by the opposite party from the one that created it. Redeeming a voucher does not trigger movement of funds from the channel to the recipient's account, but it does incur message/gas costs. Vouchers can be redeemed at any time up to Collect (see below), as long as they have a higher Nonce than a previously submitted one.
- UpdateChannelState: this is the process by which a voucher is redeemed, i.e., a voucher is submitted (but not cashed out) on-chain.
- Settle: this process starts closing the channel. It can be called by either the channel creator (sender) or the channel recipient.
- Collect: with this process funds are eventually transferred from the payment channel sender to the payment channel recipient. This process incurs message/gas costs.
Vouchers
- State: stable
- Theory Audit: wip
Traditionally, in order to transact through a Payment Channel, the payment channel parties send to each other signed messages that update the balance of the channel. In Filecoin, these signed messages are called vouchers.
Throughout the interaction between the two parties, the channel sender (`From` address) sends vouchers to the recipient (`To` address). The `Value` included in the voucher indicates the value available for the receiving party to redeem and is based on the service that the payment channel recipient has provided to the payment channel sender. Either the payment channel recipient or the payment channel sender can `Update` the balance of the channel and the balance `ToSend` to the payment channel recipient (using a voucher), but the `Update` (i.e., the voucher) has to be accepted by the other party before funds can be collected. Furthermore, the voucher has to be redeemed by the opposite party from the one that issued it. The payment channel recipient can choose to `Collect` this balance at any time, incurring the corresponding gas cost.
Redeeming a voucher does not transfer funds from the payment channel to the recipient’s account. Instead, redeeming a voucher denotes the fact that some service worth `Value` has been provided by the payment channel recipient to the payment channel sender. It is not until the whole payment channel is collected that the funds are dispatched to the provider’s account.
This is the structure of the voucher:
// A voucher can be created and sent by any of the two parties. The `To` payment channel address can redeem the voucher and then `Collect` the funds.
type SignedVoucher struct {
// ChannelAddr is the address of the payment channel this signed voucher is valid for
ChannelAddr addr.Address
// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
TimeLockMin abi.ChainEpoch
// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
// TimeLockMax set to 0 means no timeout
TimeLockMax abi.ChainEpoch
// (optional) The SecretPreImage is used by `To` to validate
SecretPreimage []byte
// (optional) Extra can be specified by `From` to add a verification method to the voucher
Extra *ModVerifyParams
// Specifies which lane the Voucher is added to (will be created if does not exist)
Lane uint64
// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
Nonce uint64
// Amount voucher can be redeemed for
Amount big.Int
// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
MinSettleHeight abi.ChainEpoch
// (optional) Set of lanes to be merged into `Lane`
Merges []Merge
// Sender's signature over the voucher
Signature *crypto.Signature
}
Over the course of a transaction cycle, each participant in the payment channel can send `Voucher`s to the other participant.
For instance, if the payment channel sender (`From` address) has sent the payment channel recipient (`To` address) the following three vouchers `(voucher_val, voucher_nonce)` for a lane with 100 FIL to be redeemed: (10, 1), (20, 2), (30, 3), then the recipient could choose to redeem (30, 3), bringing the lane’s value to 70 (100 - 30) and cancelling the preceding vouchers, i.e., they would not be able to redeem (10, 1) or (20, 2) anymore. However, they could redeem (20, 2), that is, 20 FIL, and then follow up with (30, 3) to redeem the remaining 10 FIL later.
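The superseding behaviour of voucher nonces can be captured with a small, hypothetical sketch (the types and helper below are illustrative, not part of the actor or Lotus code): per lane, the recipient typically tracks the valid voucher with the highest nonce, since redeeming it supersedes all lower-nonce vouchers on that lane.

```go
package main

import "fmt"

// voucher is a simplified stand-in for SignedVoucher, carrying only the
// fields relevant to lane accounting.
type voucher struct {
	Lane   uint64
	Nonce  uint64
	Amount int64 // value in FIL, simplified to an integer
}

// bestPerLane keeps, for every lane, the voucher with the highest nonce:
// redeeming it supersedes all lower-nonce vouchers on that lane.
func bestPerLane(vouchers []voucher) map[uint64]voucher {
	best := make(map[uint64]voucher)
	for _, v := range vouchers {
		if cur, ok := best[v.Lane]; !ok || v.Nonce > cur.Nonce {
			best[v.Lane] = v
		}
	}
	return best
}

func main() {
	received := []voucher{
		{Lane: 0, Nonce: 1, Amount: 10},
		{Lane: 0, Nonce: 2, Amount: 20},
		{Lane: 0, Nonce: 3, Amount: 30},
	}
	for lane, v := range bestPerLane(received) {
		fmt.Printf("lane %d: redeem nonce %d for %d FIL\n", lane, v.Nonce, v.Amount)
	}
}
```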
It is worth highlighting that while the `Nonce` is a strictly increasing value denoting the sequence of vouchers issued within the remit of a payment channel, the `Value` is not strictly increasing. A decreasing `Value` (although expected rarely) can be realized in cases of refunds that need to flow from the payment channel recipient back to the payment channel sender. This can be the case when some bits arrive corrupted during file retrieval, for instance.
Vouchers are signed by the party that creates them and are authenticated using a (`Secret`, `PreImage`) pair provided by the paying party (channel sender). If the `PreImage` is indeed a pre-image of the `Secret` when used as input to some given algorithm (typically a one-way function like a hash), the `Voucher` is valid. The `Voucher` itself contains the `PreImage` but not the `Secret` (which is communicated separately to the receiving party). This enables multi-hop payments, since an intermediary cannot redeem a voucher on their own. Vouchers can also be used to update the minimum height at which a channel will be settled (i.e., closed), or can carry `TimeLock`s to prevent voucher recipients from redeeming them too early. A channel can also have a `MinSettleHeight` to prevent it being closed prematurely (e.g., before the payment channel recipient has collected funds) by the payment channel creator/sender.
Once their transactions have completed, either party can choose to `Settle` (i.e., close) the channel. There is a 12-hour period after `Settle` during which either party can submit any outstanding vouchers. Once the vouchers are submitted, either party can then call `Collect`. This will send the payment channel recipient the `ToSend` amount from the channel, and the channel sender (`From` address) will be refunded the remaining balance in the channel (if any).
Lanes
- State: stable
- Theory Audit: wip
In addition, payment channels in Filecoin can be split into `lane`s, created as part of updating the channel state with a payment `voucher`. Each lane has an associated `nonce` and an amount of tokens it can be `redeemed` for. Lanes can be thought of as transactions for several different services provided by the channel recipient to the channel sender. The `nonce` plays the role of a sequence number of vouchers within a given lane, where a voucher with a higher nonce replaces a voucher with a lower nonce.
Payment channel lanes allow for a lot of accounting between parties to be done off-chain and reconciled via single updates to the payment channel. The multiple lanes enable two parties to use a single payment channel to adjudicate multiple independent sets of payments.
One example of such accounting is merging of lanes. When a pair of channel sender-recipient nodes have a payment channel established between them with many lanes, the channel recipient will have to pay gas cost for each one of the lanes in order to Collect
funds. Merging of lanes allow the channel recipient to send a “merge” request to the channel sender to request merging of (some of the) lanes and consolidate the funds. This way, the recipient can reduce the overall gas cost. As an incentive for the channel sender to accept the merge lane request, the channel recipient can ask for a lower total value to balance out the gas cost. For instance, if the recipient has collected vouchers worth of 10 FIL from two lanes, say 5 from each, and the gas cost of submitting the vouchers for these funds is 2, then it can ask for 9 from the creator if the latter accepts to merge the two lanes. This way, the channel sender pays less overall for the services it received and the channel recipient pays less gas cost to submit the voucher for the services they provided.
Lifecycle of a Payment Channel
- State: stable
- Theory Audit: wip
Summarising, we have the following sequence:
- Two parties agree to a series of transactions (for instance as part of file retrieval), with one party paying the other up to some total sum of Filecoin over time. This is part of the deal phase; it takes place off-chain and does not (at this stage) involve payment channels.
- The Payment Channel Actor is called by the payment channel sender (who is the recipient of some service, e.g., the file in the case of file retrieval) to create the payment channel and deposit funds.
- Either of the two parties can create vouchers to send to the other party.
- The voucher recipient saves the voucher locally. Each voucher has to be submitted by the opposite party from the one that created it.
- Either immediately or later, the voucher recipient “redeems” the voucher by submitting it to the chain, calling `UpdateChannelState`.
- The channel sender or the channel recipient `Settle`s the payment channel.
- The 12-hour period to close the channel begins.
- If either of the two parties has outstanding (i.e., non-redeemed) vouchers, they should now submit them to the chain (there should be the option of this being done automatically). If the channel recipient so desires, they should send a “merge lanes” request to the sender.
- The 12-hour period ends.
- Either the channel sender or the channel recipient calls `Collect`.
- Funds are transferred to the channel recipient’s account and any unclaimed balance goes back to the channel sender.
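The sequence above can be illustrated with a minimal, self-contained sketch of the channel state transitions. The types, the epoch arithmetic and the `settleDelay` value are assumptions for illustration only; they do not reproduce the actor implementation shown later in this section.

```go
package main

import (
	"errors"
	"fmt"
)

// channel is a toy model of the payment channel lifecycle described above;
// the field names mirror the actor's State fields, but the logic is only a
// simplified sketch.
type channel struct {
	balance    int64 // funds deposited by the channel sender
	toSend     int64 // highest amount redeemed through vouchers so far
	settlingAt int64 // epoch at which Collect becomes possible (0 = not settling)
}

// settleDelay stands in for the ~12-hour settlement window; the exact
// number of epochs here is an assumption for illustration.
const settleDelay = int64(12 * 60 * 2)

// updateChannelState models redeeming a voucher for a cumulative amount.
func (c *channel) updateChannelState(amount int64) error {
	if amount > c.balance {
		return errors.New("not enough funds in channel to cover voucher")
	}
	if amount > c.toSend {
		c.toSend = amount
	}
	return nil
}

// settle starts closing the channel; either party may call it.
func (c *channel) settle(now int64) { c.settlingAt = now + settleDelay }

// collect pays out the redeemed amount and refunds the remainder.
func (c *channel) collect(now int64) (toRecipient, refundToSender int64, err error) {
	if c.settlingAt == 0 || now < c.settlingAt {
		return 0, 0, errors.New("payment channel not settling or settled")
	}
	return c.toSend, c.balance - c.toSend, nil
}

func main() {
	ch := &channel{balance: 100}  // sender funds the channel
	_ = ch.updateChannelState(30) // recipient redeems a voucher worth 30
	ch.settle(1000)               // either party starts settlement
	paid, refund, _ := ch.collect(1000 + settleDelay)
	fmt.Println(paid, refund) // 30 70
}
```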
Payment Channels as part of the Filecoin Retrieval
- State: stable
- Theory Audit: wip
Payment Channels are used in the Filecoin Retrieval Market to enable efficient off-chain payments and accounting between parties for what is expected to be a series of microtransactions, as these occur during data retrieval.
In particular, given that there is no proving method for the act of sending data from a provider (miner) to a client, there is no trust anchor between the two. Therefore, in order to avoid misbehaviour, Filecoin makes use of payment channels to realise a step-wise “data transfer <-> payment” relationship between the data provider and the client (data receiver). Clients issue requests for data that miners respond to. The miner is entitled to ask for interim payments, the volume-oriented interval for which is agreed in the deal phase. In order to facilitate this process, the Filecoin client creates a payment channel once the provider has agreed to the proposed deal. The client should also lock in the payment channel monetary value equal to that needed for retrieval of the entire block of data requested. Every time a provider completes the transfer of the pre-specified amount of data, they can request a payment. The client responds to this payment request with a voucher, which the provider can redeem (immediately or later), as per the process described earlier.
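As a rough illustration of this incremental “data transfer <-> payment” loop (the Lotus payment channel manager code embedded below is the actual implementation; the following is only a hypothetical sketch with made-up units), the client issues a voucher with a growing cumulative amount each time the agreed data volume has been delivered.

```go
package main

import "fmt"

// payPerInterval models the retrieval loop sketched above: after every
// `interval` bytes delivered, the client issues a voucher whose amount is
// the cumulative value owed so far (all units are illustrative).
func payPerInterval(totalBytes, interval, pricePerByte int64) []int64 {
	var vouchers []int64
	for delivered := interval; delivered <= totalBytes; delivered += interval {
		vouchers = append(vouchers, delivered*pricePerByte) // cumulative amount per lane
	}
	return vouchers
}

func main() {
	// An 8 MiB transfer, a payment requested every 2 MiB, 1 unit per byte.
	for i, amt := range payPerInterval(8<<20, 2<<20, 1) {
		fmt.Printf("voucher %d: cumulative amount %d\n", i+1, amt)
	}
}
```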
package paychmgr
import (
"context"
"errors"
"fmt"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
cborutil "github.com/filecoin-project/go-cbor-util"
actorstypes "github.com/filecoin-project/go-state-types/actors"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/builtin/v8/paych"
"github.com/filecoin-project/lotus/api"
lpaych "github.com/filecoin-project/lotus/chain/actors/builtin/paych"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/lib/sigs"
)
// insufficientFundsErr indicates that there are not enough funds in the
// channel to create a voucher
type insufficientFundsErr interface {
Shortfall() types.BigInt
}
type ErrInsufficientFunds struct {
shortfall types.BigInt
}
func newErrInsufficientFunds(shortfall types.BigInt) *ErrInsufficientFunds {
return &ErrInsufficientFunds{shortfall: shortfall}
}
func (e *ErrInsufficientFunds) Error() string {
return fmt.Sprintf("not enough funds in channel to cover voucher - shortfall: %d", e.shortfall)
}
func (e *ErrInsufficientFunds) Shortfall() types.BigInt {
return e.shortfall
}
type laneState struct {
redeemed big.Int
nonce uint64
}
func (ls laneState) Redeemed() (big.Int, error) {
return ls.redeemed, nil
}
func (ls laneState) Nonce() (uint64, error) {
return ls.nonce, nil
}
// channelAccessor is used to simplify locking when accessing a channel
type channelAccessor struct {
from address.Address
to address.Address
// chctx is used by background processes (eg when waiting for things to be
// confirmed on chain)
chctx context.Context
sa *stateAccessor
api managerAPI
store *Store
lk *channelLock
fundsReqQueue []*fundsReq
msgListeners msgListeners
}
func newChannelAccessor(pm *Manager, from address.Address, to address.Address) *channelAccessor {
return &channelAccessor{
from: from,
to: to,
chctx: pm.ctx,
sa: pm.sa,
api: pm.pchapi,
store: pm.store,
lk: &channelLock{globalLock: &pm.lk},
msgListeners: newMsgListeners(),
}
}
func (ca *channelAccessor) messageBuilder(ctx context.Context, from address.Address) (lpaych.MessageBuilder, error) {
nwVersion, err := ca.api.StateNetworkVersion(ctx, types.EmptyTSK)
if err != nil {
return nil, err
}
av, err := actorstypes.VersionForNetwork(nwVersion)
if err != nil {
return nil, err
}
return lpaych.Message(av, from), nil
}
func (ca *channelAccessor) getChannelInfo(ctx context.Context, addr address.Address) (*ChannelInfo, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.store.ByAddress(ctx, addr)
}
func (ca *channelAccessor) outboundActiveByFromTo(ctx context.Context, from, to address.Address) (*ChannelInfo, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.store.OutboundActiveByFromTo(ctx, ca.api, from, to)
}
// createVoucher creates a voucher with the given specification, setting its
// nonce, signing the voucher and storing it in the local datastore.
// If there are not enough funds in the channel to create the voucher, returns
// the shortfall in funds.
func (ca *channelAccessor) createVoucher(ctx context.Context, ch address.Address, voucher paych.SignedVoucher) (*api.VoucherCreateResult, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
// Find the channel for the voucher
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return nil, xerrors.Errorf("failed to get channel info by address: %w", err)
}
// Set the voucher channel
sv := &voucher
sv.ChannelAddr = ch
// Get the next nonce on the given lane
sv.Nonce = ca.nextNonceForLane(ci, voucher.Lane)
// Sign the voucher
vb, err := sv.SigningBytes()
if err != nil {
return nil, xerrors.Errorf("failed to get voucher signing bytes: %w", err)
}
sig, err := ca.api.WalletSign(ctx, ci.Control, vb)
if err != nil {
return nil, xerrors.Errorf("failed to sign voucher: %w", err)
}
sv.Signature = sig
// Store the voucher
if _, err := ca.addVoucherUnlocked(ctx, ch, sv, types.NewInt(0)); err != nil {
// If there are not enough funds in the channel to cover the voucher,
// return a voucher create result with the shortfall
var ife insufficientFundsErr
if errors.As(err, &ife) {
return &api.VoucherCreateResult{
Shortfall: ife.Shortfall(),
}, nil
}
return nil, xerrors.Errorf("failed to persist voucher: %w", err)
}
return &api.VoucherCreateResult{Voucher: sv, Shortfall: types.NewInt(0)}, nil
}
func (ca *channelAccessor) nextNonceForLane(ci *ChannelInfo, lane uint64) uint64 {
var maxnonce uint64
for _, v := range ci.Vouchers {
if v.Voucher.Lane == lane {
if v.Voucher.Nonce > maxnonce {
maxnonce = v.Voucher.Nonce
}
}
}
return maxnonce + 1
}
func (ca *channelAccessor) checkVoucherValid(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]lpaych.LaneState, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.checkVoucherValidUnlocked(ctx, ch, sv)
}
func (ca *channelAccessor) checkVoucherValidUnlocked(ctx context.Context, ch address.Address, sv *paych.SignedVoucher) (map[uint64]lpaych.LaneState, error) {
if sv.ChannelAddr != ch {
return nil, xerrors.Errorf("voucher ChannelAddr doesn't match channel address, got %s, expected %s", sv.ChannelAddr, ch)
}
// check voucher is unlocked
if sv.Extra != nil {
return nil, xerrors.Errorf("voucher is Message Locked")
}
if sv.TimeLockMax != 0 {
return nil, xerrors.Errorf("voucher is Max Time Locked")
}
if sv.TimeLockMin != 0 {
return nil, xerrors.Errorf("voucher is Min Time Locked")
}
if len(sv.SecretHash) != 0 {
return nil, xerrors.Errorf("voucher is Hash Locked")
}
// Load payment channel actor state
act, pchState, err := ca.sa.loadPaychActorState(ctx, ch)
if err != nil {
return nil, err
}
// Load channel "From" account actor state
f, err := pchState.From()
if err != nil {
return nil, err
}
from, err := ca.api.ResolveToDeterministicAddress(ctx, f, nil)
if err != nil {
return nil, err
}
// verify voucher signature
vb, err := sv.SigningBytes()
if err != nil {
return nil, err
}
// TODO: technically, either party may create and sign a voucher.
// However, for now, we only accept them from the channel creator.
// More complex handling logic can be added later
if err := sigs.Verify(sv.Signature, from, vb); err != nil {
return nil, err
}
// Check the voucher against the highest known voucher nonce / value
laneStates, err := ca.laneState(ctx, pchState, ch)
if err != nil {
return nil, err
}
// If the new voucher nonce value is less than the highest known
// nonce for the lane
ls, lsExists := laneStates[sv.Lane]
if lsExists {
n, err := ls.Nonce()
if err != nil {
return nil, err
}
if sv.Nonce <= n {
return nil, fmt.Errorf("nonce too low")
}
// If the voucher amount is less than the highest known voucher amount
r, err := ls.Redeemed()
if err != nil {
return nil, err
}
if sv.Amount.LessThanEqual(r) {
return nil, fmt.Errorf("voucher amount is lower than amount for voucher with lower nonce")
}
}
// Total redeemed is the total redeemed amount for all lanes, including
// the new voucher
// eg
//
// lane 1 redeemed: 3
// lane 2 redeemed: 2
// voucher for lane 1: 5
//
// Voucher supersedes lane 1 redeemed, therefore
// effective lane 1 redeemed: 5
//
// lane 1: 5
// lane 2: 2
// -
// total: 7
totalRedeemed, err := ca.totalRedeemedWithVoucher(laneStates, sv)
if err != nil {
return nil, err
}
// Total required balance must not exceed actor balance
if act.Balance.LessThan(totalRedeemed) {
return nil, newErrInsufficientFunds(types.BigSub(totalRedeemed, act.Balance))
}
if len(sv.Merges) != 0 {
return nil, fmt.Errorf("dont currently support paych lane merges")
}
return laneStates, nil
}
func (ca *channelAccessor) checkVoucherSpendable(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, secret []byte) (bool, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
recipient, err := ca.getPaychRecipient(ctx, ch)
if err != nil {
return false, err
}
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return false, err
}
// Check if voucher has already been submitted
submitted, err := ci.wasVoucherSubmitted(sv)
if err != nil {
return false, err
}
if submitted {
return false, nil
}
mb, err := ca.messageBuilder(ctx, recipient)
if err != nil {
return false, err
}
mes, err := mb.Update(ch, sv, secret)
if err != nil {
return false, err
}
ret, err := ca.api.Call(ctx, mes, nil)
if err != nil {
return false, err
}
if ret.MsgRct.ExitCode != 0 {
return false, nil
}
return true, nil
}
func (ca *channelAccessor) getPaychRecipient(ctx context.Context, ch address.Address) (address.Address, error) {
_, state, err := ca.api.GetPaychState(ctx, ch, nil)
if err != nil {
return address.Address{}, err
}
return state.To()
}
func (ca *channelAccessor) addVoucher(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, minDelta types.BigInt) (types.BigInt, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.addVoucherUnlocked(ctx, ch, sv, minDelta)
}
func (ca *channelAccessor) addVoucherUnlocked(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, minDelta types.BigInt) (types.BigInt, error) {
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return types.BigInt{}, err
}
// Check if the voucher has already been added
for _, v := range ci.Vouchers {
eq, err := cborutil.Equals(sv, v.Voucher)
if err != nil {
return types.BigInt{}, err
}
if eq {
// Ignore the duplicate voucher.
log.Warnf("AddVoucher: voucher re-added")
return types.NewInt(0), nil
}
}
// Check voucher validity
laneStates, err := ca.checkVoucherValidUnlocked(ctx, ch, sv)
if err != nil {
return types.NewInt(0), err
}
// The change in value is the delta between the voucher amount and
// the highest previous voucher amount for the lane
laneState, exists := laneStates[sv.Lane]
redeemed := big.NewInt(0)
if exists {
redeemed, err = laneState.Redeemed()
if err != nil {
return types.NewInt(0), err
}
}
delta := types.BigSub(sv.Amount, redeemed)
if minDelta.GreaterThan(delta) {
return delta, xerrors.Errorf("addVoucher: supplied token amount too low; minD=%s, D=%s; laneAmt=%s; v.Amt=%s", minDelta, delta, redeemed, sv.Amount)
}
ci.Vouchers = append(ci.Vouchers, &VoucherInfo{
Voucher: sv,
})
if ci.NextLane <= sv.Lane {
ci.NextLane = sv.Lane + 1
}
return delta, ca.store.putChannelInfo(ctx, ci)
}
func (ca *channelAccessor) submitVoucher(ctx context.Context, ch address.Address, sv *paych.SignedVoucher, secret []byte) (cid.Cid, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return cid.Undef, err
}
has, err := ci.hasVoucher(sv)
if err != nil {
return cid.Undef, err
}
// If the channel has the voucher
if has {
// Check that the voucher hasn't already been submitted
submitted, err := ci.wasVoucherSubmitted(sv)
if err != nil {
return cid.Undef, err
}
if submitted {
return cid.Undef, xerrors.Errorf("cannot submit voucher that has already been submitted")
}
}
mb, err := ca.messageBuilder(ctx, ci.Control)
if err != nil {
return cid.Undef, err
}
msg, err := mb.Update(ch, sv, secret)
if err != nil {
return cid.Undef, err
}
smsg, err := ca.api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
return cid.Undef, err
}
// If the channel didn't already have the voucher
if !has {
// Add the voucher to the channel
ci.Vouchers = append(ci.Vouchers, &VoucherInfo{
Voucher: sv,
})
}
// Mark the voucher and any lower-nonce vouchers as having been submitted
err = ca.store.MarkVoucherSubmitted(ctx, ci, sv)
if err != nil {
return cid.Undef, err
}
return smsg.Cid(), nil
}
func (ca *channelAccessor) allocateLane(ctx context.Context, ch address.Address) (uint64, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
return ca.store.AllocateLane(ctx, ch)
}
func (ca *channelAccessor) listVouchers(ctx context.Context, ch address.Address) ([]*VoucherInfo, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
// TODO: just having a passthrough method like this feels odd. Seems like
// there should be some filtering we're doing here
return ca.store.VouchersForPaych(ctx, ch)
}
// laneState gets the LaneStates from chain, then applies all vouchers in
// the data store over the chain state
func (ca *channelAccessor) laneState(ctx context.Context, state lpaych.State, ch address.Address) (map[uint64]lpaych.LaneState, error) {
// TODO: we probably want to call UpdateChannelState with all vouchers to be fully correct
// (but technically don't need to)
laneCount, err := state.LaneCount()
if err != nil {
return nil, err
}
// Note: we use a map instead of an array to store laneStates because the
// client sets the lane ID (the index) and potentially they could use a
// very large index.
laneStates := make(map[uint64]lpaych.LaneState, laneCount)
err = state.ForEachLaneState(func(idx uint64, ls lpaych.LaneState) error {
laneStates[idx] = ls
return nil
})
if err != nil {
return nil, err
}
// Apply locally stored vouchers
vouchers, err := ca.store.VouchersForPaych(ctx, ch)
if err != nil && err != ErrChannelNotTracked {
return nil, err
}
for _, v := range vouchers {
for range v.Voucher.Merges {
return nil, xerrors.Errorf("paych merges not handled yet")
}
// Check if there is an existing laneState in the payment channel
// for this voucher's lane
ls, ok := laneStates[v.Voucher.Lane]
// If the voucher does not have a higher nonce than the existing
// laneState for this lane, ignore it
if ok {
n, err := ls.Nonce()
if err != nil {
return nil, err
}
if v.Voucher.Nonce < n {
continue
}
}
// Voucher has a higher nonce, so replace laneState with this voucher
laneStates[v.Voucher.Lane] = laneState{v.Voucher.Amount, v.Voucher.Nonce}
}
return laneStates, nil
}
// Get the total redeemed amount across all lanes, after applying the voucher
func (ca *channelAccessor) totalRedeemedWithVoucher(laneStates map[uint64]lpaych.LaneState, sv *paych.SignedVoucher) (big.Int, error) {
// TODO: merges
if len(sv.Merges) != 0 {
return big.Int{}, xerrors.Errorf("dont currently support paych lane merges")
}
total := big.NewInt(0)
for _, ls := range laneStates {
r, err := ls.Redeemed()
if err != nil {
return big.Int{}, err
}
total = big.Add(total, r)
}
lane, ok := laneStates[sv.Lane]
if ok {
// If the voucher is for an existing lane, and the voucher nonce
// is higher than the lane nonce
n, err := lane.Nonce()
if err != nil {
return big.Int{}, err
}
if sv.Nonce > n {
// Add the delta between the redeemed amount and the voucher
// amount to the total
r, err := lane.Redeemed()
if err != nil {
return big.Int{}, err
}
delta := big.Sub(sv.Amount, r)
total = big.Add(total, delta)
}
} else {
// If the voucher is *not* for an existing lane, just add its
// value (implicitly a new lane will be created for the voucher)
total = big.Add(total, sv.Amount)
}
return total, nil
}
func (ca *channelAccessor) settle(ctx context.Context, ch address.Address) (cid.Cid, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return cid.Undef, err
}
mb, err := ca.messageBuilder(ctx, ci.Control)
if err != nil {
return cid.Undef, err
}
msg, err := mb.Settle(ch)
if err != nil {
return cid.Undef, err
}
smgs, err := ca.api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
return cid.Undef, err
}
ci.Settling = true
err = ca.store.putChannelInfo(ctx, ci)
if err != nil {
log.Errorf("Error marking channel as settled: %s", err)
}
return smgs.Cid(), err
}
func (ca *channelAccessor) collect(ctx context.Context, ch address.Address) (cid.Cid, error) {
ca.lk.Lock()
defer ca.lk.Unlock()
ci, err := ca.store.ByAddress(ctx, ch)
if err != nil {
return cid.Undef, err
}
mb, err := ca.messageBuilder(ctx, ci.Control)
if err != nil {
return cid.Undef, err
}
msg, err := mb.Collect(ch)
if err != nil {
return cid.Undef, err
}
smsg, err := ca.api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
return cid.Undef, err
}
return smsg.Cid(), nil
}
type SignedVoucher struct {
// ChannelAddr is the address of the payment channel this signed voucher is valid for
ChannelAddr addr.Address
// TimeLockMin sets a min epoch before which the voucher cannot be redeemed
TimeLockMin abi.ChainEpoch
// TimeLockMax sets a max epoch beyond which the voucher cannot be redeemed
// TimeLockMax set to 0 means no timeout
TimeLockMax abi.ChainEpoch
// (optional) The SecretPreImage is used by `To` to validate
SecretPreimage []byte
// (optional) Extra can be specified by `From` to add a verification method to the voucher
Extra *ModVerifyParams
// Specifies which lane the Voucher merges into (will be created if does not exist)
Lane uint64
// Nonce is set by `From` to prevent redemption of stale vouchers on a lane
Nonce uint64
// Amount voucher can be redeemed for
Amount big.Int
// (optional) MinSettleHeight can extend channel MinSettleHeight if needed
MinSettleHeight abi.ChainEpoch
// (optional) Set of lanes to be merged into `Lane`
Merges []Merge
// Sender's signature over the voucher
Signature *crypto.Signature
}
package paych
import (
"bytes"
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/cbor"
"github.com/filecoin-project/go-state-types/exitcode"
paych0 "github.com/filecoin-project/specs-actors/actors/builtin/paych"
paych7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/paych"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/specs-actors/v8/actors/builtin"
"github.com/filecoin-project/specs-actors/v8/actors/runtime"
"github.com/filecoin-project/specs-actors/v8/actors/util/adt"
)
const (
ErrChannelStateUpdateAfterSettled = exitcode.FirstActorSpecificExitCode + iota
)
type Actor struct{}
func (a Actor) Exports() []interface{} {
return []interface{}{
builtin.MethodConstructor: a.Constructor,
2: a.UpdateChannelState,
3: a.Settle,
4: a.Collect,
}
}
func (a Actor) Code() cid.Cid {
return builtin.PaymentChannelActorCodeID
}
func (a Actor) State() cbor.Er {
return new(State)
}
var _ runtime.VMActor = Actor{}
//type ConstructorParams struct {
// From addr.Address // Payer
// To addr.Address // Payee
//}
type ConstructorParams = paych0.ConstructorParams
// Constructor creates a payment channel actor. See State for meaning of params.
func (pca *Actor) Constructor(rt runtime.Runtime, params *ConstructorParams) *abi.EmptyValue {
// Only InitActor can create a payment channel actor. It creates the actor on
// behalf of the payer/payee.
rt.ValidateImmediateCallerType(builtin.InitActorCodeID)
// check that both parties are capable of signing vouchers
to, err := pca.resolveAccount(rt, params.To)
builtin.RequireNoErr(rt, err, exitcode.Unwrap(err, exitcode.ErrIllegalState), "failed to resolve to address: %s", params.To)
from, err := pca.resolveAccount(rt, params.From)
builtin.RequireNoErr(rt, err, exitcode.Unwrap(err, exitcode.ErrIllegalState), "failed to resolve from address: %s", params.From)
emptyArr, err := adt.MakeEmptyArray(adt.AsStore(rt), LaneStatesAmtBitwidth)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to create empty array")
emptyArrCid, err := emptyArr.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to persist empty array")
st := ConstructState(from, to, emptyArrCid)
rt.StateCreate(st)
return nil
}
// Resolves an address to a canonical ID address and requires it to address an account actor.
func (pca *Actor) resolveAccount(rt runtime.Runtime, raw addr.Address) (addr.Address, error) {
resolved, err := builtin.ResolveToIDAddr(rt, raw)
if err != nil {
return addr.Undef, exitcode.ErrIllegalState.Wrapf("failed to resolve address %v: %w", raw, err)
}
codeCID, ok := rt.GetActorCodeCID(resolved)
if !ok {
return addr.Undef, exitcode.ErrIllegalArgument.Wrapf("no code for address %v", resolved)
}
if codeCID != builtin.AccountActorCodeID {
return addr.Undef, exitcode.ErrForbidden.Wrapf("actor %v must be an account (%v), was %v", raw,
builtin.AccountActorCodeID, codeCID)
}
return resolved, nil
}
////////////////////////////////////////////////////////////////////////////////
// Payment Channel state operations
////////////////////////////////////////////////////////////////////////////////
type UpdateChannelStateParams = paych7.UpdateChannelStateParams
type SignedVoucher = paych7.SignedVoucher
func VoucherSigningBytes(t *SignedVoucher) ([]byte, error) {
osv := *t
osv.Signature = nil
buf := new(bytes.Buffer)
if err := osv.MarshalCBOR(buf); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// Modular Verification method
//type ModVerifyParams struct {
// // Actor on which to invoke the method.
// Actor addr.Address
// // Method to invoke.
// Method abi.MethodNum
// // Pre-serialized method parameters.
// Params []byte
//}
type ModVerifyParams = paych0.ModVerifyParams
// Specifies which `Lane`s to be merged with what `Nonce` on channelUpdate
//type Merge struct {
// Lane uint64
// Nonce uint64
//}
type Merge = paych0.Merge
func (pca Actor) UpdateChannelState(rt runtime.Runtime, params *UpdateChannelStateParams) *abi.EmptyValue {
var st State
rt.StateReadonly(&st)
// both parties must sign voucher: one who submits it, the other explicitly signs it
rt.ValidateImmediateCallerIs(st.From, st.To)
var signer addr.Address
if rt.Caller() == st.From {
signer = st.To
} else {
signer = st.From
}
sv := params.Sv
if sv.Signature == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher has no signature")
}
if st.SettlingAt != 0 && rt.CurrEpoch() >= st.SettlingAt {
rt.Abortf(ErrChannelStateUpdateAfterSettled, "no vouchers can be processed after SettlingAt epoch")
}
if len(params.Secret) > MaxSecretSize {
rt.Abortf(exitcode.ErrIllegalArgument, "secret must be at most 256 bytes long")
}
vb, err := VoucherSigningBytes(&sv)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to serialize signedvoucher")
err = rt.VerifySignature(*sv.Signature, signer, vb)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "voucher signature invalid")
pchAddr := rt.Receiver()
svpchIDAddr, found := rt.ResolveAddress(sv.ChannelAddr)
if !found {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher payment channel address %s does not resolve to an ID address", sv.ChannelAddr)
}
if pchAddr != svpchIDAddr {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher payment channel address %s does not match receiver %s", svpchIDAddr, pchAddr)
}
if rt.CurrEpoch() < sv.TimeLockMin {
rt.Abortf(exitcode.ErrIllegalArgument, "cannot use this voucher yet!")
}
if sv.TimeLockMax != 0 && rt.CurrEpoch() > sv.TimeLockMax {
rt.Abortf(exitcode.ErrIllegalArgument, "this voucher has expired!")
}
if sv.Amount.Sign() < 0 {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher amount must be non-negative, was %v", sv.Amount)
}
if len(sv.SecretHash) > 0 {
hashedSecret := rt.HashBlake2b(params.Secret)
if !bytes.Equal(hashedSecret[:], sv.SecretHash) {
rt.Abortf(exitcode.ErrIllegalArgument, "incorrect secret!")
}
}
if sv.Extra != nil {
code := rt.Send(
sv.Extra.Actor,
sv.Extra.Method,
builtin.CBORBytes(sv.Extra.Data),
abi.NewTokenAmount(0),
&builtin.Discard{},
)
builtin.RequireSuccess(rt, code, "spend voucher verification failed")
}
rt.StateTransaction(&st, func() {
laneFound := true
lstates, err := adt.AsArray(adt.AsStore(rt), st.LaneStates, LaneStatesAmtBitwidth)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lanes")
// Find the voucher lane, creating if necessary.
laneId := sv.Lane
laneState := findLane(rt, lstates, sv.Lane)
if laneState == nil {
laneState = &LaneState{
Redeemed: big.Zero(),
Nonce: 0,
}
laneFound = false
}
if laneFound {
if laneState.Nonce >= sv.Nonce {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher has an outdated nonce, existing nonce: %d, voucher nonce: %d, cannot redeem",
laneState.Nonce, sv.Nonce)
}
}
// The next section actually calculates the payment amounts to update the payment channel state
// 1. (optional) sum already redeemed value of all merging lanes
redeemedFromOthers := big.Zero()
for _, merge := range sv.Merges {
if merge.Lane == sv.Lane {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher cannot merge lanes into its own lane")
}
otherls := findLane(rt, lstates, merge.Lane)
if otherls == nil {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher specifies invalid merge lane %v", merge.Lane)
return // makes linters happy
}
if otherls.Nonce >= merge.Nonce {
rt.Abortf(exitcode.ErrIllegalArgument, "merged lane in voucher has outdated nonce, cannot redeem")
}
redeemedFromOthers = big.Add(redeemedFromOthers, otherls.Redeemed)
otherls.Nonce = merge.Nonce
err = lstates.Set(merge.Lane, otherls)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane %d", merge.Lane)
}
// 2. To prevent double counting, remove already redeemed amounts (from
// voucher or other lanes) from the voucher amount
laneState.Nonce = sv.Nonce
balanceDelta := big.Sub(sv.Amount, big.Add(redeemedFromOthers, laneState.Redeemed))
// 3. set new redeemed value for merged-into lane
laneState.Redeemed = sv.Amount
newSendBalance := big.Add(st.ToSend, balanceDelta)
// 4. check operation validity
if newSendBalance.LessThan(big.Zero()) {
rt.Abortf(exitcode.ErrIllegalArgument, "voucher would leave channel balance negative")
}
if newSendBalance.GreaterThan(rt.CurrentBalance()) {
rt.Abortf(exitcode.ErrIllegalArgument, "not enough funds in channel to cover voucher")
}
// 5. add new redemption ToSend
st.ToSend = newSendBalance
// update channel settlingAt and MinSettleHeight if delayed by voucher
if sv.MinSettleHeight != 0 {
if st.SettlingAt != 0 && st.SettlingAt < sv.MinSettleHeight {
st.SettlingAt = sv.MinSettleHeight
}
if st.MinSettleHeight < sv.MinSettleHeight {
st.MinSettleHeight = sv.MinSettleHeight
}
}
err = lstates.Set(laneId, laneState)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to store lane", laneId)
st.LaneStates, err = lstates.Root()
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to save lanes")
})
return nil
}
func (pca Actor) Settle(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
var st State
rt.StateTransaction(&st, func() {
rt.ValidateImmediateCallerIs(st.From, st.To)
if st.SettlingAt != 0 {
rt.Abortf(exitcode.ErrIllegalState, "channel already settling")
}
st.SettlingAt = rt.CurrEpoch() + SettleDelay
if st.SettlingAt < st.MinSettleHeight {
st.SettlingAt = st.MinSettleHeight
}
})
return nil
}
func (pca Actor) Collect(rt runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
var st State
rt.StateReadonly(&st)
rt.ValidateImmediateCallerIs(st.From, st.To)
if st.SettlingAt == 0 || rt.CurrEpoch() < st.SettlingAt {
rt.Abortf(exitcode.ErrForbidden, "payment channel not settling or settled")
}
// send ToSend to "To"
codeTo := rt.Send(
st.To,
builtin.MethodSend,
nil,
st.ToSend,
&builtin.Discard{},
)
builtin.RequireSuccess(rt, codeTo, "Failed to send funds to `To`")
// the remaining balance will be returned to "From" upon deletion.
rt.DeleteActor(st.From)
return nil
}
// Returns the insertion index for a lane ID, with the matching lane state if found, or nil.
func findLane(rt runtime.Runtime, ls *adt.Array, id uint64) *LaneState {
if id > MaxLane {
rt.Abortf(exitcode.ErrIllegalArgument, "maximum lane ID is 2^63-1")
}
var out LaneState
found, err := ls.Get(id, &out)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalState, "failed to load lane %d", id)
if !found {
return nil
}
return &out
}
Multisig Wallet & Actor
- State: reliable
- Theory Audit: done
The Multisig actor is a single actor representing a group of Signers. Signers may be external users, other Multisigs, or even the Multisig itself. There should be a maximum of 256 signers in a multisig wallet. If more signers are needed, multisigs should be combined into a tree of multisigs.
The implementation of the Multisig Actor can be found here.
The Multisig Actor statuses can be found here.
Storage Mining
- State: reliable
- Theory Audit: wip
The Storage Mining System is the part of the Filecoin Protocol that deals with storing clients’ data and producing proof artifacts that demonstrate correct storage behavior.
Storage Mining is one of the most central parts of the Filecoin protocol overall, as it provides all the required consensus algorithms based on proven storage power in the network. Miners are selected to mine blocks and extend the blockchain based on the storage power that they have committed to the network. Storage is added in units of sectors, and sectors are promises to the network that some storage will remain available for a promised duration. In order to participate in Storage Mining, storage miners have to: i) add storage to the system, and ii) prove that they maintain a copy of the data they have agreed to store throughout the sector’s lifetime.
Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable
choices. Miners should explore the design space to arrive at something that (a) satisfies protocol
and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in `Deals`), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec
primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for
various optimizations for implementers, miners, and users to make. In some parts, we describe
algorithms that could be replaced by other, more optimized versions, but in those cases it is
important that the protocol constraints are satisfied. The protocol constraints are
spelled out in clear detail. It is up
to implementers who deviate from the algorithms presented here to ensure their modifications
satisfy those constraints, especially those relating to protocol security.
Sector
- State: stable
- Theory Audit: n/a
Sectors are the basic units of storage on Filecoin. They have standard sizes, as well as well-defined time-increments for commitments. The size of a sector balances security concerns against usability. A sectorʼs lifetime is determined in the storage market, and sets the promised duration of the sector.
In the first iteration of the protocol, 32GiB and 64GiB sectors are supported. Maximum sector lifetime is determined by the proof algorithm. Maximum sector lifetime is initially 18 months. A sector naturally expires when it reaches the end of its lifetime. Additionally, the miner can extend the lifetime of their sectors. Rewards are earned and collaterals recovered when the miner fulfils their commitment.
Individual deals are formed when a storage miner and client are matched on Filecoinʼs storage market. The protocol does not distinguish miners matching with real clients from miners generating self-deals. However, committed capacity is a construction that is introduced to make self-dealing unnecessary and economically irrational. In earlier designs of the network, only sectors filled with deals increased the minerʼs likelihood of winning the block reward. This led to the expectation that miners would attack and exploit the network by playing the role of both storage provider and client, creating a malicious self-deal.
If a sector is only partially full of deals, the network considers the remainder to be committed capacity. Similarly, sectors with no deals are called committed capacity sectors; miners are rewarded for proving to the network that they are pledging storage capacity and are encouraged to find clients who need storage. When a miner finds storage demand, they can upgrade their committed capacity sectors to earn additional revenue in the form of a deal fee from paying clients. More details on how to add storage and upgrade sectors can be found in Adding Storage.
Committed capacity sectors improve minersʼ incentives to store client data, but they donʼt solve the problem entirely. Storing real client files adds some operational overhead for storage miners. In certain circumstances – for example, if a miner values block rewards far more than deal fees – miners might still choose to ignore client data entirely and simply store committed capacity to increase their storage power as rapidly as possible in pursuit of block rewards. This would make Filecoin less useful and limit clientsʼ ability to store data on the network. Filecoin addresses this issue by introducing the concept of verified clients. Verified clients are certified by a decentralized network of verifiers. Once verified, they can post a predetermined amount of verified client deal data to the storage market, set by the size of their DataCap. Sectors with verified client deals are awarded more storage power – and therefore more block rewards – than sectors without. This provides storage miners with an additional incentive to store client data.
Verification is not intended to be scarce – it will be very easy to acquire for anyone with real data to store on Filecoin. Even though verifiers may allocate verified client DataCaps liberally (yet responsibly and transparently) to make onboarding easier, the overall effect should be a dramatic increase in the proportion of useful data stored on Filecoin.
Once a sector is full (either with client data or as committed capacity), the unsealed sector is combined by a proving tree into a single root `UnsealedSectorCID`. The sealing process then encodes (using CBOR) an unsealed sector into a sealed sector, with the root `SealedSectorCID`.
This diagram shows the composition of an unsealed sector and a sealed sector.
Sector Storage & Window PoSt
The Lotus implementation of the Window PoSt scheduler can be found here and the actual execution of Window PoSt on a sector can be found here.
The Lotus block store implementation for sectors can be found here.
Sector Lifecycle
- State: stable
- Theory Audit: n/a
Once the sector has been generated and the deal has been incorporated into the Filecoin blockchain, the storage miner begins generating Proofs-of-Spacetime (PoSt) on the sector, starting to potentially win block rewards and also earn storage fees. Parameters are set so that miners generate and capture more value if they guarantee that their sectors will be around for the duration of the original contract. However, some bounds are placed on a sectorʼs lifetime to improve the network performance.
In particular, as sectors of shorter lifetime are added, the networkʼs capacity can be bottlenecked. The reason is that the chainʼs bandwidth is consumed with new sectors only replacing expiring ones. As a result, a minimum sector lifetime of six months was introduced to more effectively utilize chain bandwidth and miners have the incentive to commit to sectors of longer lifetime. The maximum sector lifetime is limited by the security of the present proofs construction. For a given set of proofs and parameters, the security of Filecoinʼs Proof-of-Replication (PoRep) is expected to decrease as sector lifetimes increase.
It is reasonable to assume that miners enter the network by adding Committed Capacity sectors, that is, sectors that do not contain user data. Once miners agree on storage deals with clients, they upgrade their sectors to Regular Sectors. Alternatively, if they find Verified Clients and agree on a storage deal with them, they upgrade their sector accordingly. Depending on whether or not a sector includes a (verified) deal, the miner acquires the corresponding storage power in the network.
All sectors are expected to remain live until the end of their sector lifetime and early dropping of sectors will result in slashing. This is done to provide clients a certain level of guarantee on the reliability of their hosted data. Sector termination comes with a corresponding termination fee.
As with every system it is expected that sectors will present faults. Although this might degrade the quality offered by the network, the reaction of the miner to the fault drives system decisions on whether or not the miner should be penalized. A miner can recover the faulty sector, let the system terminate the sector automatically after 42 days of faults, or proactively terminate the sector immediately in the case of unrecoverable data loss. In case of a faulty sector, a small penalty fee approximately equal to the block reward that the sector would win per day is applied. The fee is calculated per day of the sector being unavailable to the network, i.e. until the sector is recovered or terminated.
Miners can extend the lifetime of a sector at any time, though the sector will be expected to remain live until it has reached the end of the new sector lifetime. This can be done by submitting an `ExtendSectorExpiration` message to the chain.
A sector can be in one of the following states.
State | Description |
---|---|
Precommitted | Miner seals sector and submits `miner.PreCommitSector` or `miner.PreCommitSectorBatch` |
Committed | Miner generates a Seal proof and submits `miner.ProveCommitSector` or `miner.ProveCommitAggregate` |
Active | Miner generates valid PoSt proofs and timely submits `miner.SubmitWindowedPoSt` |
Faulty | Miner fails to generate a proof (see Fault section) |
Recovering | Miner declared a faulty sector as recovered via `miner.DeclareFaultsRecovered` |
Terminated | Either the sector has expired, or it was terminated early by the miner via `miner.TerminateSectors`, or it failed to be proven for 42 consecutive proving periods. |
Sector Quality
- State: stable
- Theory Audit: n/a
Given different sector contents, not all sectors have the same usefulness to the network. The notion of Sector Quality distinguishes between sectors with heuristics indicating the presence of valuable data. That distinction is used to allocate more subsidies to higher-quality sectors. To quantify the contribution of a sector to the consensus power of the network, some relevant parameters are described here.
- Sector Spacetime: This measurement is the sector size multiplied by its promised duration in byte-epochs.
- Deal Weight: This weight converts spacetime occupied by deals into consensus power. Deal weight of verified client deals in a sector is called Verified Deal Weight and will be greater than the regular deal weight.
- Deal Quality Multiplier: This factor is assigned to different deal types (committed capacity, regular deals, and verified client deals) to reward different content.
- Sector Quality Multiplier: Sector quality is assigned on Activation (the epoch when the miner starts proving theyʼre storing the file). The sector quality multiplier is computed as an average of deal quality multipliers (committed capacity, regular deals, and verified client deals), weighted by the amount of spacetime each type of deal occupies in the sector.
- Raw Byte Power: This measurement is the size of a sector in bytes.
- Quality-Adjusted Power: This parameter measures the consensus power of stored data on the network, and is equal to Raw Byte Power multiplied by Sector Quality Multiplier.
The multipliers for committed capacity and regular deals are equal to make self-dealing irrational in the current configuration of the protocol. In the future, it may make sense to pick different values, depending on other ways of preventing attacks becoming available.
The high quality multiplier and easy verification process for verified client deals facilitate decentralization of miner power. Unlike other proof-of-work-based protocols, like Bitcoin, central control of the network is not simply decided based on the resources that a new participant can bring. In Filecoin, accumulating control either requires significantly more resources or some amount of consent from verified clients, who must make deals with the centralized miners for them to increase their influence. Verified client mechanisms add a layer of social trust to a purely resource-driven network. As long as the process is fair and transparent with accountability and bounded trust, abuse can be contained and minimized. A high sector quality multiplier is a very powerful lever for clients to push storage providers to build features that will be useful to the network as a whole and increase the networkʼs long-term value. The verification process and DataCap allocation are meant to evolve over time as the community learns to automate and improve this process. An illustration of sectors with various contents and their respective sector qualities is shown in the following Figure.
A sectorʼs Quality Adjusted Power is a weighted average of the quality of its space, based on the size, duration and quality of its deals.
Name | Description |
---|---|
QualityBaseMultiplier (QBM) | Multiplier for power for storage without deals. |
DealWeightMultiplier (DWM) | Multiplier for power for storage with deals. |
VerifiedDealWeightMultiplier (VDWM) | Multiplier for power for storage with verified deals. |
The formula for calculating Sector Quality Adjusted Power (or QAp, often referred to as power) makes use of the following factors:
- `dealSpaceTime`: sum of the `duration*size` of each deal
- `verifiedSpaceTime`: sum of the `duration*size` of each verified deal
- `baseSpaceTime` (spacetime without deals): `sectorSize*sectorDuration - dealSpaceTime - verifiedSpaceTime`
Based on these, the average quality of a sector is:
$avgQuality = \frac{baseSpaceTime*QBM + dealSpaceTime*DWM + verifiedSpaceTime*VDWM}{sectorSize*sectorDuration*QBM}$
The Sector Quality Adjusted Power is:
$sectorQuality = avgQuality*size$
During `miner.PreCommitSector` and `miner.PreCommitSectorBatch`, the sector quality is calculated and stored in the sector information.
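To make the formula concrete, here is a small illustrative calculation. The multiplier values mirror the protocol configuration at the time of writing (committed capacity and regular deals weighted equally, verified deals 10x), but they should be treated as assumptions, with the actors code as the authority; the sector duration used below is likewise arbitrary.

```go
package main

import "fmt"

// Assumed multiplier values (check the actors code for the authoritative
// constants): committed capacity and regular deals carry the same weight,
// verified deals carry 10x.
const (
	qbm  = 10.0  // QualityBaseMultiplier
	dwm  = 10.0  // DealWeightMultiplier
	vdwm = 100.0 // VerifiedDealWeightMultiplier
)

// sectorQAPower applies the avgQuality formula above and returns the
// quality-adjusted power of a sector, given its raw size (bytes), duration
// (epochs), and the spacetime occupied by regular and verified deals.
func sectorQAPower(sectorSize, duration, dealSpaceTime, verifiedSpaceTime float64) float64 {
	sectorSpaceTime := sectorSize * duration
	baseSpaceTime := sectorSpaceTime - dealSpaceTime - verifiedSpaceTime
	avgQuality := (baseSpaceTime*qbm + dealSpaceTime*dwm + verifiedSpaceTime*vdwm) /
		(sectorSpaceTime * qbm)
	return avgQuality * sectorSize
}

func main() {
	size := float64(32 << 30)      // 32 GiB sector
	duration := float64(1_500_000) // sector lifetime in epochs (illustrative)
	// Half of the sector's spacetime is filled with verified deals.
	qap := sectorQAPower(size, duration, 0, size*duration/2)
	fmt.Printf("raw power: %.0f bytes, QA power: %.0f bytes\n", size, qap)
}
```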
Sector Sealing
- State: stable
- Theory Audit: wip
Before a Sector can be used, the Miner must seal the Sector: encode the data in the Sector to prepare it for the proving process.
- Unsealed Sector: A Sector of raw data.
- UnsealedCID (CommD): The root hash of the Unsealed Sector’s merkle tree. Also called CommD, or “data commitment.”
- Sealed Sector: A Sector that has been encoded to prepare it for the proving process.
- SealedCID (CommR): The root hash of the Sealed Sector’s merkle tree. Also called CommR, or “replica commitment.”
Sealing a sector through Proof-of-Replication (PoRep) is a computation-intensive process that results in a unique encoding of the sector. Once data is sealed, storage miners: generate a proof; run a SNARK on the proof to compress it; and finally, submit the result of the compression to the blockchain as a certification of the storage commitment. Depending on the PoRep algorithm and protocol security parameters, cost profiles and performance characteristics vary and tradeoffs have to be made among sealing cost, security, onchain footprint, retrieval latency and so on. However, sectors can be sealed with commercial hardware and sealing cost is expected to decrease over time. The Filecoin Protocol will launch with Stacked Depth Robust (SDR) PoRep with a planned upgrade to Narrow Stacked Expander (NSE) PoRep with improvement in both cost and retrieval latency.
The Lotus-specific set of functions applied to the sealing of a sector can be found here.
Randomness
- State: stable
- Theory Audit: wip
Randomness is an important attribute that helps the network verify the integrity of Miners’ stored data. Filecoin’s block creation process includes two types of randomness:
- DRAND: Values pulled from a distributed random beacon
- VRF: The output of a Verifiable Random Function (VRF), which takes the previous block’s VRF value and produces the current block’s VRF value.
Each block produced in Filecoin includes values pulled from these two sources of randomness.
When Miners submit proofs about their stored data, the proofs incorporate references to randomness added at specific epochs. Assuming these values were not able to be predicted ahead of time, this helps ensure that Miners generated proofs at a specific point in time.
There are two proof types. Each uses one of the two sources of randomness:
- Windowed PoSt: Uses Drand values
- Proof of Replication (PoRep): Uses VRF values
Drawing randomness for sector commitments
- State: stable
- Theory Audit: wip
Tickets are used as input to calculation of the ReplicaID in order to tie Proofs-of-Replication to a given chain, thereby preventing long-range attacks (from another miner in the future trying to reuse SEALs).
The ticket has to be drawn from a finalized block in order to prevent the miner from potentially losing storage (in case of a chain reorg) even though their storage is intact.
Verification should ensure that the ticket was drawn no farther back than necessary by the miner. We note that tickets can uniquely be associated with a given round in the protocol (lest a hash collision be found), but the round number is made explicit by the miner in `commitSector`.
We present precisely how ticket selection and verification should work. In the below, we use the following notation:
- `F` – Finality (number of rounds)
- `X` – round in which SEALing starts
- `Z` – round in which the SEAL appears (in a block)
- `Y` – round announced in the SEAL `commitSector` (should be X, but a miner could use any Y <= X), denoted by the ticket selection
- `T` – estimated time for SEAL, dependent on sector size
- `G = T + variance` – necessary flexibility to account for network delay and SEAL-time variance
We expect Filecoin will be able to produce estimates for sector commitment time based on sector sizes, e.g.:
(estimate, variance) <--- SEALTime(sectors)
G and T will be selected using these.
Picking a Ticket to Seal: When starting to prepare a SEAL in round X, the miner should draw a ticket from X-F with which to compute the SEAL.
Verifying a Seal’s ticket: When verifying a SEAL in round Z, a verifier should ensure that the ticket used to generate the SEAL is found in the range of rounds [Z-T-F-G, Z-T-F+G].
Prover
─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
│
▼
X-F ◀───────F────────▶ X ◀──────────T─────────▶ Z
-G . +G . .
───(┌───────┐)───────────────( )──────────────────────( )────────▶
└───────┘ ' ' time
[Z-T-F-G, Z-T-F+G]
▲
└ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
Verifier
Note that the prover here is submitting a message on chain (i.e. the SEAL). Using an older ticket than necessary to generate the SEAL is something the miner may do to gain more confidence about finality (since we are in a probabilistically final system). However, it has a cost in terms of securing the chain in the face of long-range attacks: specifically, by mixing chain randomness in here, we ensure that an attacker going back a month in time to try to create their own chain would have to completely regenerate any and all sectors that have drawn randomness since then in order to use them for their fork’s power.
We break this down as follows:
- The miner should draw from X-F.
- The verifier wants to find what X-F should have been (to ensure the miner is not drawing from farther back), even though Y (i.e. the round of the ticket actually used) is an unverifiable value.
- Thus, the verifier will need to make an inference about what X-F is likely to have been based on:
  - (known) round in which the message is received (Z)
  - (known) finality value (F)
  - (approximate) SEAL time (T)
- Because T is an approximate value, and to account for network delay and variance in SEAL time across miners, the verifier allows for a G offset from the assumed value of X-F: Z-T-F, hence verifying that the ticket is drawn from the range [Z-T-F-G, Z-T-F+G].
In practice, the Filecoin protocol will include a MAX_SEAL_TIME for each sector size and proof type.
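For illustration, the verifier's check reduces to simple interval arithmetic over epochs. The Go sketch below uses hypothetical function and parameter names rather than the actual proof-validation code; the 900-epoch finality matches Filecoin's chain finality parameter, while the seal-time and variance figures are invented for the example.

```go
// Illustrative sketch of the ticket-epoch check described above; the function
// and parameter names are hypothetical, not part of the miner actor API.
package main

import "fmt"

// validTicketEpoch reports whether a ticket drawn at epoch ticketEpoch is
// acceptable for a SEAL that lands on chain at epoch z, given finality f,
// estimated seal time t, and allowed variance g.
func validTicketEpoch(ticketEpoch, z, f, t, g int64) bool {
	lo := z - t - f - g // earliest acceptable draw epoch (Z-T-F-G)
	hi := z - t - f + g // latest acceptable draw epoch (Z-T-F+G)
	return ticketEpoch >= lo && ticketEpoch <= hi
}

func main() {
	// Example: finality 900, estimated seal time 1200 epochs, variance 120.
	fmt.Println(validTicketEpoch(5000, 7200, 900, 1200, 120)) // true: 5000 is in [4980, 5220]
	fmt.Println(validTicketEpoch(4000, 7200, 900, 1200, 120)) // false: ticket drawn too far back
}
```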
Sector Faults
- State: Stable
- Theory Audit: WIP
It is very important for storage providers to have a strong incentive to both report the failure to the chain and attempt recovery from the fault, in order to uphold the storage guarantee for the networkʼs clients. Without this incentive, it is impossible to distinguish an honest minerʼs hardware failure from malicious behavior, which is necessary to treat miners fairly. The size of the fault fees depends on the severity of the failure and the rewards that the miner is expected to earn from the sector, so that incentives remain aligned. The two types of sector storage fault fees are:
- Sector fault fee: This fee is paid per sector per day while the sector is in a faulty state. This fee is not paid on the first day the system detects the fault, allowing a one-day grace period for recovery without a fee. The size of the sector fault fee is slightly more than the amount the sector is expected to earn per day in block rewards. If a sector remains faulty for more than 42 consecutive days, the sector will pay a termination fee and be removed from the chain state. As storage miner reliability increases above a reasonable threshold, the risk posed by these fees decreases rapidly.
- Sector termination fee: A sector can be terminated before its expiration through automatic faults or miner decisions. A termination fee is charged that is, in principle, equivalent to how much the sector has earned so far, up to a limit, in order to avoid discouraging long sector lifetimes. In an active termination, the miner decides to stop mining and pays a fee to leave. In a fault termination, a sector has been in a faulty state for too long, and the chain terminates the deal, returns unpaid deal fees to the client, and penalizes the miner. The termination fee is currently capped at 90 days’ worth of the block reward that the sector would earn. Miners are responsible for deciding to comply with local regulations, and may sometimes need to accept a termination fee for complying with content laws.
Many of the concepts and parameters above make use of the notion of “how much a sector would have earned in a day” in order to understand and align incentives for participants. This concept is robustly tracked and extrapolated on chain.
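As a rough illustration of how these fees relate to expected rewards, the Go sketch below parameterizes the two fees described above. The 1.1 multiplier, the FIL amounts, and the function names are stand-ins; the actual on-chain formulas operate over the extrapolated expected reward tracked by the protocol.

```go
// Back-of-the-envelope sketch of the fee logic described above. Constants and
// multipliers here are illustrative only, not protocol values.
package main

import "fmt"

const (
	faultMaxDays       = 42 // consecutive faulty days before automatic termination
	terminationCapDays = 90 // cap on the termination fee, in days of expected reward
)

// dailyFaultFee is "slightly more than" the sector's expected daily reward;
// the 1.1 multiplier is a stand-in, not the protocol value.
func dailyFaultFee(expectedDailyReward float64) float64 {
	return 1.1 * expectedDailyReward
}

// terminationFee approximates "what the sector has earned so far, up to a cap".
func terminationFee(expectedDailyReward float64, daysActive int) float64 {
	days := daysActive
	if days > terminationCapDays {
		days = terminationCapDays
	}
	return float64(days) * expectedDailyReward
}

func main() {
	reward := 0.02 // hypothetical FIL/day earned by one sector
	fmt.Printf("fault fee per day: %.4f FIL\n", dailyFaultFee(reward))
	fmt.Printf("termination fee after 30 days: %.4f FIL\n", terminationFee(reward, 30))
	fmt.Printf("termination fee after 400 days (capped): %.4f FIL\n", terminationFee(reward, 400))
}
```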
Sector Recovery
- State: Reliable
- Theory Audit: WIP
Miners should try to recover faulty sectors in order to avoid paying the penalty, which is approximately equal to the block reward that the miner would receive from that sector. After fixing technical issues, the miner should call RecoveryDeclaration and respond to the WindowPoSt challenge in order to regain the power from that sector.
Note that if a sector stays in a faulty state for 42 consecutive days, it will be terminated and the miner will incur a penalty. The miner can terminate the sector themselves by calling TerminationDeclaration, if they know that they cannot recover it, in which case they will pay a smaller penalty fee.
Both the RecoveryDeclaration and the TerminationDeclaration can be found in the miner actor implementation.
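The decision a miner faces can be summarized as follows. This is a hypothetical sketch in Go; the types and strings are illustrative and do not correspond to the miner actor's actual interface.

```go
// Hypothetical sketch of the recovery decision described above; none of these
// types or methods are the actual miner actor interface.
package main

import "fmt"

type SectorStatus struct {
	FaultyDays  int  // consecutive days the sector has been faulty
	Recoverable bool // whether the miner believes the fault can be fixed
}

// nextAction mirrors the prose: recover if possible, otherwise terminate early
// for a smaller penalty rather than waiting for automatic termination at 42 days.
func nextAction(s SectorStatus) string {
	switch {
	case s.Recoverable:
		return "declare recovery, then prove the sector in its next WindowPoSt"
	case s.FaultyDays < 42:
		return "declare termination voluntarily and pay the smaller penalty"
	default:
		return "sector is terminated automatically with the full penalty"
	}
}

func main() {
	fmt.Println(nextAction(SectorStatus{FaultyDays: 3, Recoverable: true}))
	fmt.Println(nextAction(SectorStatus{FaultyDays: 10, Recoverable: false}))
}
```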
Adding Storage
- State: Stable
- Theory Audit: WIP
A Miner adds more storage in the form of Sectors. Adding more storage is a two-step process:
- PreCommitting a Sector: A Miner publishes a Sector’s SealedCID through miner.PreCommitSector or miner.PreCommitSectorBatch, and makes a deposit. The Sector is now registered to the Miner, and the Miner must ProveCommit the Sector or lose their deposit.
- ProveCommitting a Sector: The Miner provides a Proof of Replication (PoRep) for the Sector through miner.ProveCommitSector or miner.ProveCommitAggregate. This proof must be submitted AFTER a delay (the InteractiveEpoch), and BEFORE PreCommit expiration.
This two-step process provides assurance that the Miner’s PoRep actually proves that the Miner has replicated the Sector data and is generating proofs from it:
- ProveCommitments must happen AFTER the InteractiveEpoch (150 blocks after Sector PreCommit), as the randomness included at that epoch is used in the PoRep.
- ProveCommitments must happen BEFORE the PreCommit expiration, which is a boundary established to make sure Miners don’t have enough time to “fake” PoRep generation.
For each Sector successfully ProveCommitted, the Miner becomes responsible for continuously proving the existence of their Sectors’ data. In return, the Miner is awarded storage power.
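The timing constraints above amount to a window check around the InteractiveEpoch. The Go sketch below assumes the 150-epoch delay mentioned above and takes the PreCommit expiration as a caller-supplied value; the function name is hypothetical.

```go
// Sketch of the timing constraints described above. The 150-epoch delay is the
// InteractiveEpoch offset referenced in the text; the expiration is passed in
// by the caller here rather than looked up from protocol state.
package main

import "fmt"

const interactiveDelay = 150 // epochs between PreCommit and the InteractiveEpoch

// canProveCommit reports whether a ProveCommit landing at proveEpoch falls
// inside the valid window for a Sector PreCommitted at preCommitEpoch.
func canProveCommit(preCommitEpoch, proveEpoch, preCommitExpiry int64) bool {
	interactiveEpoch := preCommitEpoch + interactiveDelay
	return proveEpoch > interactiveEpoch && proveEpoch < preCommitExpiry
}

func main() {
	fmt.Println(canProveCommit(1000, 1200, 2500)) // true: after epoch 1150, before expiry
	fmt.Println(canProveCommit(1000, 1100, 2500)) // false: randomness not yet available
}
```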
Upgrading Sectors
- State: Stable
- Theory Audit: WIP
Miners are granted storage power in exchange for the storage space they dedicate to Filecoin. Ideally, this storage space is used to store data on behalf of Clients, but there may not always be enough Clients to utilize all the space a Miner has to offer.
In order for a Miner to maximize storage power (and profit), they should take advantage of all available storage space immediately, even before they find enough Clients to use this space.
To facilitate this, there are two types of Sectors that may be sealed and ProveCommitted:
- Regular Sector: A Sector that contains Client data
- Committed Capacity (CC) Sector: A Sector with no data (all zeroes)
Miners are free to choose which types of Sectors to store. CC sectors, in particular, allow Miners to immediately make use of existing disk space, earning storage power and a higher chance at producing a block. Miners can decide whether to upgrade their CC sectors to take client deals or to continue proving CC sectors. Currently, CC sectors store randomness by default in the client implementation, but this does not preclude miners from storing any type of useful data that increases their private utility in CC sectors (as long as it is legal). The protocol expects that new use-cases and diversity will emerge out of such behaviour.
To incentivize Miners to hoard storage space and dedicate it to Filecoin, CC Sectors have a unique capability: they can be “upgraded” to Regular Sectors (also called “replacing a CC Sector”).
Miners upgrade their ProveCommitted CC Sectors by PreCommitting a Regular Sector, and specifying that it should replace an existing CC Sector. Once the Regular Sector is successfully ProveCommitted, it will replace the existing CC Sector. If the newly ProveCommitted Regular sector contains a Verified Client deal, i.e., a deal with higher Sector Quality, then the miner’s storage power will increase accordingly.
Upgrading capacity currently involves resealing, that is, creating a unique representation of the new data included in the Sector through a computationally intensive process. Looking ahead, committed capacity upgrades should eventually be possible without a reseal. A succinct and publicly verifiable proof that the committed capacity has been correctly replaced with replicated data should achieve this goal. However, this mechanism must be fully specified to preserve the security and incentives of the network before it can be implemented and is, therefore, left as a future improvement.
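For orientation, the sketch below shows, with illustrative Go types, the kind of information a replacement PreCommit carries: the new Sector plus a reference to the CC Sector it replaces. The field names approximate the on-chain parameters but are not the authoritative actor types.

```go
// Illustrative only: a simplified view of the information a replacement
// PreCommit carries. Field names approximate the on-chain parameters but are
// not the authoritative actor types.
package main

import "fmt"

type ReplacePreCommit struct {
	SectorNumber      uint64 // the new Regular Sector being PreCommitted
	ReplaceCapacity   bool   // true: this Sector replaces an existing CC Sector
	ReplacedSector    uint64 // the ProveCommitted CC Sector to be replaced
	ReplacedDeadline  uint64 // deadline locating the CC Sector on chain
	ReplacedPartition uint64 // partition within that deadline
}

func main() {
	p := ReplacePreCommit{
		SectorNumber:      42,
		ReplaceCapacity:   true,
		ReplacedSector:    7,
		ReplacedDeadline:  12,
		ReplacedPartition: 0,
	}
	fmt.Printf("precommit sector %d replacing CC sector %d\n", p.SectorNumber, p.ReplacedSector)
}
```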
Storage Miner
- State: Reliable
- Theory Audit: WIP
Storage Mining Subsystem
- State: Reliable
- Theory Audit: WIP
The Filecoin Storage Mining Subsystem ensures a storage miner can effectively commit storage to the Filecoin protocol in order to both:
- Participate in the Filecoin Storage Market by taking on client data and participating in storage deals.
- Participate in Filecoin Storage Power Consensus by verifying and generating blocks to grow the Filecoin blockchain and earning block rewards and fees for doing so.
The above involves a number of steps for putting storage online and maintaining it, such as:
- Committing new storage (see Sector, Sector Sealing and PoRep)
- Continuously proving storage (see WinningPoSt and Window PoSt)
- Declaring storage faults and recovering from them.
Filecoin Proofs
- State: Reliable
- Theory Audit: WIP
Proof of Replication
- State: Reliable
- Theory Audit: WIP
A Proof of Replication (PoRep) is a proof that a Miner has correctly generated a unique replica of some underlying data.
In practice, the underlying data is the raw data contained in an Unsealed Sector, and a PoRep is a SNARK proof that the sealing process was performed correctly to produce a Sealed Sector (See Sealing a Sector).
It is important to note that the replica should not only be unique to the miner, but also to the time when a miner has actually created the replica, i.e., sealed the sector. This means that if the same miner produces a sealed sector out of the same raw data twice, then this would count as a different replica.
When Miners commit to storing data, they must first produce a valid Proof of Replication.
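A minimal sketch of that binding follows: the replica identifier mixes the prover, the sector, the unsealed data commitment, and the ticket drawn at sealing time, so re-sealing the same data later yields a different replica. The hash choice and field list here are simplifications; the proofs library uses its own derivation and domain separation.

```go
// Minimal sketch of how a replica identifier binds a replica to a specific
// miner and sealing instance. The real derivation in the proofs library uses
// different hashing and domain separation; this only illustrates the inputs.
package main

import (
	"crypto/sha256"
	"fmt"
)

// replicaID mixes the prover, the sector, the unsealed data commitment (CommD),
// and the chain ticket drawn at sealing time. Changing any input (including the
// ticket, i.e. sealing the same data again later) yields a different replica.
func replicaID(proverID, sectorID, commD, ticket []byte) [32]byte {
	h := sha256.New()
	for _, part := range [][]byte{proverID, sectorID, commD, ticket} {
		h.Write(part)
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	id1 := replicaID([]byte("miner"), []byte("sector-1"), []byte("commd"), []byte("ticket-epoch-100"))
	id2 := replicaID([]byte("miner"), []byte("sector-1"), []byte("commd"), []byte("ticket-epoch-200"))
	fmt.Println(id1 != id2) // true: same data, different sealing time => different replica
}
```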
Proof of Spacetime
- State: Reliable
- Theory Audit: WIP
A Proof of Spacetime (aka PoSt) is a long-term assurance of a Miner’s continuous storage of their Sectors’ data. This is not a single proof, but a collection of proofs the Miner has submitted over time. Periodically, a Miner must add to these proofs by submitting a WindowPoSt:
- Fundamentally, a WindowPoSt is a collection of merkle proofs over the underlying data in a Miner’s Sectors.
- WindowPoSts bundle proofs of various leaves across groups of Sectors (called Partitions).
- These proofs are submitted as a single SNARK.
The historical and ongoing submission of WindowPoSts creates assurance that the Miner has been storing, and continues to store, the Sectors they agreed to store in the storage deal.
Once a Miner successfully adds and ProveCommits a Sector, the Sector is assigned to a Deadline: a specific window of time during which PoSts must be submitted. The day is broken up into 48 individual Deadlines of 30 minutes each, and ProveCommitted Sectors are assigned to one of these 48 Deadlines.
- PoSts may only be submitted for the currently-active Deadline. Deadlines are open for 30 minutes, starting from the Deadline’s “Open” epoch and ending at its “Close” epoch.
- PoSts must incorporate randomness pulled from a random beacon. This randomness becomes publicly available at the Deadline’s “Challenge” epoch, which is 20 epochs prior to its “Open” epoch.
- Deadlines also have a FaultCutoff epoch, 70 epochs prior to the Deadline’s “Open” epoch. After this epoch, Faults can no longer be declared for the Deadline’s Sectors.
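Assuming 30-second epochs (as on mainnet), a 30-minute Deadline spans 60 epochs, and the offsets above can be computed directly, as in the Go sketch below; the function and constant names are illustrative.

```go
// Sketch of the deadline timing described above, assuming 30-second epochs:
// 48 deadlines of 30 minutes each cover one 24-hour proving period.
package main

import "fmt"

const (
	epochsPerDeadline = 60 // 30 minutes at 30-second epochs
	deadlinesPerDay   = 48
	challengeLookback = 20 // epochs before Open at which the randomness is available
	faultCutoff       = 70 // epochs before Open after which faults can't be declared
)

// deadlineEpochs returns the Challenge, Open, Close, and FaultCutoff epochs for
// deadline index i of a proving period starting at periodStart.
func deadlineEpochs(periodStart, i int64) (challenge, open, close_, cutoff int64) {
	open = periodStart + i*epochsPerDeadline
	close_ = open + epochsPerDeadline
	challenge = open - challengeLookback
	cutoff = open - faultCutoff
	return
}

func main() {
	c, o, cl, fc := deadlineEpochs(100000, 3)
	fmt.Printf("deadline 3: challenge=%d open=%d close=%d faultCutoff=%d\n", c, o, cl, fc)
	// Output: deadline 3: challenge=100160 open=100180 close=100240 faultCutoff=100110
}
```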
Miner Accounting
- State: Reliable
- Theory Audit: WIP
A Miner’s financial gain or loss is affected by the following three actions:
- Miners deposit tokens to act as collateral for their PreCommitted and ProveCommitted Sectors
- Miners earn tokens from block rewards, when they are elected to mine a new block and extend the blockchain.
- Miners lose tokens if they fail to prove storage of a sector and are given penalties as a result.
Balance Requirements
- State: Reliable
- Theory Audit: WIP
A Miner’s token balance MUST cover ALL of the following:
- PreCommit Deposits: When a Miner PreCommits a Sector, they must supply a “precommit deposit” for the Sector, which acts as collateral. If the Sector is not ProveCommitted on time, this deposit is removed and burned.
- Initial Pledge: When a Miner ProveCommits a Sector, they must supply an “initial pledge” for the Sector, which acts as collateral. If the Sector is terminated, this deposit is removed and burned along with rewards earned by this sector up to a limit.
- Locked Funds: When a Miner receives tokens from block rewards, the tokens are locked and added to the Miner’s vesting table to be unlocked linearly over some future epochs.
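In other words, a Miner's spendable balance is what remains after these commitments. The Go sketch below illustrates that accounting with simplified types; the real bookkeeping lives in the miner actor's state and uses big-integer token amounts.

```go
// Sketch of the balance requirement described above: available balance must
// cover precommit deposits, initial pledges, and still-vesting rewards. The
// fields and float amounts are illustrative, not the miner actor's state.
package main

import "fmt"

type MinerFunds struct {
	Balance          float64 // total tokens held by the miner actor (FIL)
	PreCommitDeposit float64 // sum of deposits for not-yet-proven Sectors
	InitialPledge    float64 // sum of pledges for ProveCommitted Sectors
	LockedVesting    float64 // block rewards still vesting
}

// availableBalance is what the miner can actually withdraw or spend on fees.
func availableBalance(f MinerFunds) float64 {
	return f.Balance - f.PreCommitDeposit - f.InitialPledge - f.LockedVesting
}

func main() {
	f := MinerFunds{Balance: 100, PreCommitDeposit: 10, InitialPledge: 60, LockedVesting: 25}
	fmt.Printf("available: %.2f FIL\n", availableBalance(f)) // available: 5.00 FIL
}
```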
Faults, Penalties and Fee Debt
- State: Reliable
- Theory Audit: WIP