Motivation
When Bitcoin first deployed its main-net in January 2009, there was no restriction on block size. Very quickly a trivial security exploit became apparent. A malicious miner could construct large blocks containing a double spend and flood the network with these proposed blocks, forcing all other miners to waste time on bad large blocks while the attacker mined out small correct blocks in parallel, gaining more block reward than is fair. Thus, to restore the security assumptions of the protocol, a one-megabyte fixed block size was introduced.
The drawback of a fixed block size, of course, is fewer Transactions Per Second (TPS).
Because the A-Block HFM protocol issues a single block to the mining network, proposed by the mempool nodes held under RAFT consensus, miners can validate blocks in parallel in $k \log(t)$ time (where $k$ is calculated from the number of miners mining and the block size; the exact function is yet to be determined).
The principal idea behind accelerating validation lies in noting that all miners receive the same Merkle root from the mempool RAFT. Thus, in principle, miners should be able to parallelize validation. If, for example, there are 20,000 transactions in a block and 1,000 miners, each miner need only check 20 transactions. The fact that the block is 20 megabytes in size then causes no network congestion, because each miner is only validating around 20 kilobytes. Said another way, the network can process dynamic block sizes.
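As a back-of-the-envelope sketch of this split (the 1 kB average transaction size is an assumption for illustration, not a protocol constant):

```python
def per_miner_payload(num_txs: int, num_miners: int, avg_tx_bytes: int = 1000):
    """Return (txs_per_miner, bytes_per_miner) under an even parallel split."""
    txs_per_miner = -(-num_txs // num_miners)  # ceiling division
    return txs_per_miner, txs_per_miner * avg_tx_bytes

# 20,000 transactions split across 1,000 miners: 20 txs (~20 kB) each.
print(per_miner_payload(20_000, 1_000))  # → (20, 20000)
```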
In practice we need to establish several safeguards to ensure that such a system does not introduce new attack surfaces into validation.
Block bounties
The network transaction fee is comprised of three terms. The third term is a 0.00025 ABC$ payment per transaction to the winning miner of that block. These fees comprise the block bounty. Unlike the Bitcoin protocol, this fee is static and the same for every transaction. Below 4,000 transactions per block, the total reward stays very close to the 30-coin block reward. At 4,000 transactions per block the winning miner gets 31 coins, but at 1,000,000 transactions per block (block sizes being slightly over 1 gigabyte) the block bounty is 250 ABC$; taken with the block reward, that amounts to 280 ABC$. This fee is set as an A-Block Improvement Proposal (AIP) and as such can be adjusted over time by community vote. What this means for validation is that when blocks are small, fewer miners participate; as blocks grow, miners increase in number. Because the validation payload is parallelizable and larger blocks attract more miners, per-miner payloads remain roughly fixed over time. Miners working "just in time, not just in case" is partly achieved through the block bounty.
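The bounty arithmetic above can be sketched as follows. The integer unit of 0.00001 ABC$ is purely an illustrative device to keep the arithmetic exact; it is not a protocol constant:

```python
FEE_PER_TX = 25           # 0.00025 ABC$ expressed in units of 0.00001 ABC$
BLOCK_REWARD = 3_000_000  # 30 ABC$ block reward in the same units (assumed)

def winning_miner_payout(num_txs: int) -> float:
    """Block reward plus the per-transaction block bounty, in ABC$."""
    return (BLOCK_REWARD + num_txs * FEE_PER_TX) / 100_000

print(winning_miner_payout(4_000))      # → 31.0
print(winning_miner_payout(1_000_000))  # → 280.0
```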
Transaction numbering
When each miner receives less than the full transaction data, they need to be able to reliably determine whether the transaction they have received is the one they were expecting. In other words, given some Merkle tree data, how can they be sure that it correlates to an expected transaction number? A naive solution is to prefix all transactions with an index and hash the result as the start of the Merkle tree generation process. However, there is no need to bloat the ledger storage nodes with unnecessary bytes, as one can compute the required index as follows. The first transaction tx 1 is placed on the zeroth arm of the Merkle tree and the second transaction is placed on the first arm. As such we must not forget that Merkle tree positions are offset by one from the transaction number, as shown in this table.
| Position 0 | Position 1 | Position 2 | … | Position n |
|---|---|---|---|---|
| tx 1 | tx 2 | tx 3 | … | tx n + 1 |
In figure 1 “Indexing Merkle leaves” we decorate a Merkle tree as follows. By convention, starting at the root element and moving towards the leaves of the tree, all moves to the left are decorated with a “0” and all moves to the right are decorated with a “1”. If we are given TX_a, we reach it from the Merkle root by following the 0 1 0 arrows. 010 is binary for 2, and consulting our position table, position 2 records transaction 3; thus TX_a is transaction number 3.
We now have all the tools we need to calculate a transaction number, referencing figure 1 “calculate transaction number”. We request transaction 4 and are given the raw transaction data plus the additional hash values H(12), H(3) and H(5678). To compute (and validate) the Merkle root H(12345678), the miner starts by computing the hash of TX_4. Next the miner concatenates H(3) with H(4) and computes the hash of the concatenation, H(H(3)|H(4)). Importantly, concatenation is order specific: H(H(3)|H(4)) is not equal to H(H(4)|H(3)). Because H(3) was given and we needed to compute H(H(3)|H(4)), our transaction must sit to the right of the last Merkle branch, as shown in red to the right of the concatenation; a right move is decorated 1, so the last bit of our position index is 1. Next, because we were given H(12), had to calculate H(34), and the concatenated hash is H(H(12)|H(34)), what we are working with is again to the right. Therefore the second-to-last bit must also be a 1. Because we are crawling leaf to root, we are reading the indexing bits backwards, from last to first. Finally, we have calculated H(1234) and were given H(5678) in the concatenation H(H(1234)|H(5678)), but this time what we calculated, and hence are working with, is to the LEFT of the concatenation, so it must lie on the 0 arm of the Merkle tree. Thus the first bit must be 0. Reading the bit index then gives us 011, or 3, and position 3 on the Merkle tree stores transaction 4. Thus if the Merkle root we have computed is the Merkle root published by the ledger, we know for sure that we were given transaction 4.
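The leaf-to-root walk above can be sketched in code. This is an illustrative reconstruction, not the protocol's wire format: SHA-256 stands in for the network's hash function, and the `(sibling, side)` path encoding is an assumption:

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_and_index(tx: bytes, path, root: bytes):
    """Walk the Merkle path leaf-to-root.

    `path` lists (sibling_hash, side) pairs, where side == "L" means the
    given sibling sits to the LEFT of our running hash. Returns
    (root_matches, position); the transaction number is position + 1.
    """
    node = sha(tx)
    bits = []
    for sibling, side in path:
        if side == "L":        # sibling on the left -> we are on the right arm
            node = sha(sibling + node)
            bits.append(1)
        else:                  # sibling on the right -> we are on the left arm
            node = sha(node + sibling)
            bits.append(0)
    position = 0
    for bit in reversed(bits): # bits were collected leaf-to-root: read backwards
        position = (position << 1) | bit
    return node == root, position

# Rebuild the 8-leaf tree from the worked example to check transaction 4.
txs = [f"tx{i}".encode() for i in range(1, 9)]
leaves = [sha(t) for t in txs]
h12, h34 = sha(leaves[0] + leaves[1]), sha(leaves[2] + leaves[3])
h56, h78 = sha(leaves[4] + leaves[5]), sha(leaves[6] + leaves[7])
h1234, h5678 = sha(h12 + h34), sha(h56 + h78)
root = sha(h1234 + h5678)

# Transaction 4 comes with H(3), H(12) and H(5678), exactly as in figure 1.
ok, pos = verify_and_index(txs[3], [(leaves[2], "L"), (h12, "L"), (h5678, "R")], root)
print(ok, pos, "-> transaction", pos + 1)  # → True 3 -> transaction 4
```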
Figure 1: Computing transaction indexes
Mining sessions and sub-tree selection
Miners should not be able to choose which transactions they wish to validate, nor should mempool nodes be able to choose which validation payloads are suitable. This is necessary to avoid collusion. However, all parties need a simple method for being committed to sending and receiving sub-trees. The solution is to calculate the number of sub-trees $s$ (given the number of miners bidding for a mining session and the size of the block), then read off the PoW hash value from the session request mod $s$ to uniformly select a sub-puzzle at random. This is simple yet robust, in that one mining session costs more to generate than the current hash difficulty of mining. Thus spamming sessions to artificially reduce puzzle size and avoid proper validation costs more than the hash rate required to 51% attack the network. Second, the session requests include a recent UNiCORN value, which timestamps the sessions in a cryptographically secure way, preventing replay attacks. Finally, session requests are tied to block reward, so any rule-breaking in setting up bad sessions automatically precludes an attacker from any potential reward. High cost, no reward and secure timestamps restrict sub-tree attacks to a level of “harder to break than the network protocol”.
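A minimal sketch of the selection rule, with SHA-256 standing in for the PoW hash and the session-request contents entirely hypothetical:

```python
import hashlib

def select_subtree(session_request: bytes, num_subtrees: int) -> int:
    """Uniformly pick a sub-tree index from the hash of the session request.

    Neither the miner nor the mempool chooses the assignment: it falls out
    of the hash. Regenerating session requests to grind for a favourable
    sub-tree is expensive by design, since a session costs more to produce
    than the current mining difficulty.
    """
    digest = hashlib.sha256(session_request).digest()
    return int.from_bytes(digest, "big") % num_subtrees

# Hypothetical session request carrying a recent UNiCORN value.
print(select_subtree(b"miner-42|unicorn-beacon-0xabc", 12))
```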
Preventing double spend attacks: Ordering with overlap
Naively implemented, parallel validation of sub-trees to validate the global tree introduces a trivial double spend attack. This attack is solved by ordering Merkle trees and adding sub-puzzle overlap.
Figure 2: Ordering and overlap needed to parallelize Merkle tree validation
Looking at tab A in figure 2 we see a Merkle tree à la Bitcoin. A single miner has a global view of all transactions in a block and can see that some coin payment from Alice to Bob, TX_AB, presents a double spend against the transaction where Alice pays Charlie, TX_AC, from the same coin supply. Thus a Bitcoin miner would never submit such a block to the network, because the validating miners would reject it. However, when parallelizing validation in sub-trees, as illustrated in tab B of figure 2, naively carving up a Merkle tree between Miner X and Miner Y separates TX_AB and TX_AC into two different sub-trees. Thus miner X is unaware of the double spend in miner Y’s sub-tree and vice versa.
A partial solution is to order all transactions in a Merkle tree by the “from” public key address. Each miner now checks that the PubKey address of TX_1 is less than that of TX_2, which is less than that of TX_3, and so on, for the full sub-tree they are validating. However, there is no guarantee that a mempool node has not ordered the transactions in a malicious way, as demonstrated in tab C of figure 2. Here, the mempool node has ordered each sub-tree such that local ordering is sequential, but globally TX_2 (representing TX_AB) and TX_3 (representing TX_AC) are placed into adjacent sub-trees to again execute a double spend attack.
The full solution is to order all transactions AND to ensure that each sub-tree has at least one transaction of overlap with its neighbouring sub-trees. In tabs D and E, miner X and miner Y overlap on transaction 4. As they have ensured an overlap of at least one transaction, and have checked their local ordering and found no issues, they are guaranteed that globally the tree is ordered AND free from double spends, while working in parallel on sub-tree validation.
Finally, in tabs F and G, we see a malicious mempool attempt to separate TX_2 from TX_3 in order to execute a double spend attack. Miner X, following the protocol, does not detect any problems, BUT miner Y sees that TX_5 precedes TX_3 and furthermore finds that TX_3 is non-sequential. They thus become convinced that TX_3 is a double spend, branch off the main mempool RAFT and start mining off of another RAFT.
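The local check each miner runs can be sketched as follows; the `(from_address, tx_id)` tuples are an illustrative stand-in for real transaction data. Because neighbouring sub-trees share a boundary transaction, strict local ordering in every sub-tree implies global ordering, and a double spend (a repeated “from” address) breaks strict ordering in whichever sub-tree holds both halves of the conflict:

```python
def locally_ordered(subtree) -> bool:
    """Strictly increasing 'from' addresses: no duplicates, no reordering."""
    addrs = [addr for addr, _tx in subtree]
    return all(a < b for a, b in zip(addrs, addrs[1:]))

# Tabs D/E: Miner X and Miner Y overlap on TX_4. Both views are ordered,
# so the concatenated block is globally ordered and double-spend free.
miner_x = [("a1", "TX_1"), ("a2", "TX_2"), ("a3", "TX_3"), ("a4", "TX_4")]
miner_y = [("a4", "TX_4"), ("a5", "TX_5"), ("a6", "TX_6")]
print(locally_ordered(miner_x), locally_ordered(miner_y))  # → True True

# A double spend reuses the same 'from' address, so strict ordering fails
# in the sub-tree containing both conflicting transactions.
attack = [("a1", "TX_1"), ("a2", "TX_AB"), ("a2", "TX_AC")]
print(locally_ordered(attack))  # → False
```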
Signalling acceptance
Miners do not work in a Pareto-efficient scheme. They compete in a prisoner’s-dilemma Nash-equilibrium game and will not voluntarily or actively share information with the network unless there is a direct pay-off for them. The method by which a miner “signals” their acceptance of a block must take these considerations into account.
In the HFM protocol, miner signaling works as follows:
- Mempool nodes broadcast their data continuously as a policy rule. This allows relay nodes to detect if their transactions are being censored, if miner requests are being ignored or if collusion is occurring. Only mempool nodes that are elected into a RAFT broadcast.
- Miner Y, who uncovers the double spend, immediately stops mining, prunes their UTXO seeding table, re-selects a RAFT, and reports the error and the affected Merkle root to the new mempool RAFT, which, if it accepts the error, adjusts its UTXO seeding tables and starts broadcasting.
- Other miners, such as miner X, detect a new mempool RAFT broadcasting a new block citing the double spend error on the infected branch. If they agree that the error is legitimate, they too update their seeding UTXO tables and begin mining off of the good block.
- The good block gains increasing hash rate support, as no miner wants to bear the cost of mining off of a bad chain: all work they do will be orphaned once the majority of miners migrate to mining off of a good tip. Once this happens, the indifferent miners working off of a bad tip lose all of their coins on main net. Thus mining off of a good tip is the selfish choice.
- Relay nodes working off of infected mempool nodes start losing their transaction fees, as their fees are not mined out on the main chain. They therefore constantly poll all mempools in the UTXO seeding table to ensure they are connected to the right mempools.
- When each application connects to a relay node or checks their friendly miner’s UTXO seeding tables they too become aware of the changes in mempool credentials and follow the heaviest chain.
- Thus the miner who detected the initial fault signals their disagreement with a bad block, or alternatively signals their acceptance by continuing to mine off of the UNiCORN-selected mempool RAFT’s proposed blocks.
Conclusion
Validation through mining on the HFM protocol follows this procedure:
- Miners apply for a mining session with the network.
- A method exists that takes the number of transactions in a block (block size) and the number of miners in a mining round, and calculates the number of sub-trees $s$ that ensures every sub-tree is seen by at least one miner with 99.99% certainty. This is the accepted error rate for standard Bitcoin implementations in consensus under the six-block-confirmation longest chain rule.
- A miner sets their Session Request as a number SR. They then compute SR mod $s$ to determine which sub-tree to expect.
- Sub-tree 4 of 12, for example, when there are 120 transactions in a block, must be TX_40 through TX_50; the requirement of overlap takes the sub-tree to TX_39 through TX_51.
- Miners will check that all transactions they received are sequential and that the computed transaction number matches the expected number.
- If miners confirm order, transaction number, “well-formedness” and policy enforcement, then they signal their acceptance by continuing to mine for the mempool RAFT elected by the UNiCORN RNG sequence.
- If a miner detects an error, they prune their locally held seeding UTXO table, re-select a new mempool RAFT, report the error, and start mining off of a new tip.
- Miners unaware of a problem detect the number of mempool RAFTs active at any time and choose to follow the correctly proposed block, knowing that in time it will gain the heaviest PoW.
- If two tips are both correct, miners follow the tip with the most work behind it.
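The sub-tree range arithmetic in the steps above can be sketched as follows; the clamping at the block edges is an assumption about boundary handling:

```python
def subtree_range(k: int, num_subtrees: int, num_txs: int, overlap: int = 1):
    """Inclusive transaction-number range for sub-tree k (0-indexed),
    widened by `overlap` transactions on each side, clamped to the block."""
    width = num_txs // num_subtrees
    lo = max(1, k * width - overlap)
    hi = min(num_txs, (k + 1) * width + overlap)
    return lo, hi

# Sub-tree 4 of 12 with 120 transactions: TX_40-TX_50 base, TX_39-TX_51 with overlap.
print(subtree_range(4, 12, 120))  # → (39, 51)
```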

