RLP (Recursive Length Prefix) is the encoding scheme Ethereum uses to serialize all objects into arrays of bytes.
It is described in the Yellow Paper with many formulas and can be difficult to understand.
Because Ethereum is a decentralized blockchain that enables the execution of smart contracts and the storage of data on chain, its data structures need to be serialized, or converted into a binary format, so they take up a minimal amount of space on the blockchain.
RLP is a prefix-based encoding scheme that encodes arbitrarily structured binary data (byte arrays) in a way that is easy to encode and decode.
The RLP algorithm works by recursively encoding a list of items, where an item is defined as either:
- a string (byte array), or
- a list of items.
For example:
- a string (byte array), including the empty string
- a list containing any number of strings
- a nested structure such as ["cat", ["dog", "mouse"], [], [""]]
Walking through the Yellow Paper (Appendix B)
The Yellow Paper defines the following sets to describe arbitrarily structured binary data (byte arrays):
- T: the set of arbitrarily structured binary data, i.e. byte arrays and structural sequences
- L: the set of all tree-like structures that are not a single leaf
- O: the set of 8-bit bytes
- B: the set of all sequences of bytes (byte arrays, the leaves of the tree)

The disjoint union is used to distinguish the empty byte array (in B) from the empty list (in L).
The RLP function is defined through two sub-functions:
- the first handles byte arrays
- the second handles sequences of further values
Let's dive into the first function, RLP(B), which handles byte arrays. If the value to be serialized is a byte array, RLP(B) takes one of three forms:
- For a single byte less than 128 (decimal), the output is the input itself.
- If the byte array contains fewer than 56 bytes, the output is the input prefixed by a byte equal to the length of the array plus 128.
- Otherwise, the output is the big-endian representation of the input's length, placed in front of the input and preceded in turn by a byte equal to 183 plus the length of that big-endian representation.
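These three cases can be sketched with modern Python `bytes` (a hedged illustration of the RLP(B) rules only; `encode_bytes` is my own name, not part of any library):

```python
def encode_bytes(data: bytes) -> bytes:
    """Sketch of RLP(B), the byte-array half of RLP."""
    if len(data) == 1 and data[0] < 0x80:
        # Case 1: a single byte below 128 encodes as itself.
        return data
    if len(data) < 56:
        # Case 2: short string -> one prefix byte: 0x80 + length.
        return bytes([0x80 + len(data)]) + data
    # Case 3: long string -> 0xb7 + len(length bytes), then the
    # big-endian length, then the data itself.
    be_len = len(data).to_bytes((len(data).bit_length() + 7) // 8, "big")
    return bytes([0xb7 + len(be_len)]) + be_len + data

assert encode_bytes(b"a") == b"a"                  # single byte < 0x80
assert encode_bytes(b"") == b"\x80"                # empty string
assert encode_bytes(b"ethereum") == b"\x88ethereum"
assert encode_bytes(b"x" * 56)[:2] == b"\xb8\x38"  # 56 crosses the boundary
```

Note how 55 bytes is the largest payload that fits case 2, while 56 bytes already needs a separate length byte (0xb8 says "one length byte follows", and 0x38 is 56).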
Second, let's see how RLP(L) works. We use RLP to encode each item of the list and concatenate the outputs; this concatenation, s(x), is the recursive application of RLP to each item. Then:
- If the length of the concatenated payload is smaller than 56, the output is: (192 + payload length) + payload.
- Otherwise, the output is: (247 + the length of the big-endian representation of the payload length) + that big-endian representation + payload.
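Putting RLP(B) and RLP(L) together, a minimal recursive encoder might look like the following (a sketch for illustration only; the function names are mine, and items are Python `bytes` or lists):

```python
def rlp_encode(item) -> bytes:
    """Encode bytes via RLP(B), lists via RLP(L)."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item
        return _with_length(item, 0x80, 0xb7)
    # s(x): concatenate the RLP encoding of every element...
    payload = b"".join(rlp_encode(x) for x in item)
    # ...then prefix a list header (0xc0-based short, 0xf7-based long).
    return _with_length(payload, 0xc0, 0xf7)

def _with_length(payload: bytes, short: int, long: int) -> bytes:
    if len(payload) < 56:
        return bytes([short + len(payload)]) + payload
    be = len(payload).to_bytes((len(payload).bit_length() + 7) // 8, "big")
    return bytes([long + len(be)]) + be + payload

# The empty byte array and the empty list get distinct encodings:
assert rlp_encode(b"") == b"\x80"
assert rlp_encode([]) == b"\xc0"
```

This also shows concretely why the disjoint union of B and L matters: 0x80 and 0xc0 are different bytes, so a decoder can always tell an empty string from an empty list.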
The RLP algorithm as code (example from ethereum.org):
```python
def rlp_encode(input):
    if isinstance(input, str):
        # 0x80 = 128 (decimal)
        if len(input) == 1 and ord(input) < 0x80:
            return input
        else:
            return encode_length(len(input), 0x80) + input
    elif isinstance(input, list):
        output = ''
        for item in input:
            output += rlp_encode(item)
        return encode_length(len(output), 0xc0) + output

def encode_length(L, offset):
    if L < 56:
        return chr(L + offset)
    elif L < 256**8:
        BL = to_binary(L)
        return chr(len(BL) + offset + 55) + BL
    else:
        raise Exception("input too long")

def to_binary(x):
    if x == 0:
        return ''
    else:
        return to_binary(int(x / 256)) + chr(x % 256)
```
You can see examples in the Ethereum documentation for some inputs:
- The string "ethereum" => ["0x88", "e", "t", "h", "e", "r", "e", "u", "m"]: the string is 8 characters long, which is smaller than 56, so the output is encode_length(8, 128) + input = chr(136) + "ethereum" = ["0x88", "e", "t", "h", "e", "r", "e", "u", "m"].
- The list ["ethereum", "foundation"]:
    - As in the example above, rlp_encode("ethereum") = ["0x88", "e", "t", "h", "e", "r", "e", "u", "m"]
    - and rlp_encode("foundation") = ["0x8A", "f", "o", "u", "n", "d", "a", "t", "i", "o", "n"]
    - So the output is rlp_encode(["ethereum", "foundation"]) = encode_length(20, 192) + ["0x88", "e", "t", "h", "e", "r", "e", "u", "m", "0x8A", "f", "o", "u", "n", "d", "a", "t", "i", "o", "n"] = ["0xD4", "0x88", "e", "t", "h", "e", "r", "e", "u", "m", "0x8A", "f", "o", "u", "n", "d", "a", "t", "i", "o", "n"]
RLP decoding
Because of the rules of RLP encoding, the input to RLP decoding is an array of binary data. Decoding proceeds as follows:
- From the first byte of the input, determine the data type, the length of the data, and the offset of the payload.
- Based on that type and offset, decode the data accordingly.
- Continue the loop to decode the remainder of the input.
From the RLP formulas, the rules for decoding the data type and offset are:
- If the first byte is in the range [0x00, 0x7f] and the input is exactly one byte long, the data is a string: the byte itself.
- If the first byte is in the range [0x80, 0xb7], the data is a string whose length equals the first byte minus 0x80; the string follows the first byte.
- If the first byte is in the range [0xb8, 0xbf], the first byte is followed by the string's length, encoded in (first byte minus 0xb7) bytes, which is in turn followed by the string itself.
- If the first byte is in the range [0xc0, 0xf7], the data is a list whose total payload length equals the first byte minus 0xc0; the concatenated RLP encodings of all its items follow the first byte.
- If the first byte is in the range [0xf8, 0xff], the first byte is followed by the list's total payload length, encoded in (first byte minus 0xf7) bytes, which is in turn followed by the concatenated RLP encodings of all its items.
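These five rules can be turned into a small bytes-based decoder sketch (my own illustration, not the go-ethereum implementation; it skips some validation a production decoder would need):

```python
def decode_item(data: bytes):
    """Decode one RLP item from the front of `data`.
    Returns (item, remaining_bytes)."""
    prefix = data[0]
    if prefix <= 0x7f:                      # rule 1: the byte is the payload
        return data[:1], data[1:]
    if prefix <= 0xb7:                      # rule 2: short string
        n = prefix - 0x80
        return data[1:1 + n], data[1 + n:]
    if prefix <= 0xbf:                      # rule 3: long string
        ln = prefix - 0xb7
        n = int.from_bytes(data[1:1 + ln], "big")
        return data[1 + ln:1 + ln + n], data[1 + ln + n:]
    if prefix <= 0xf7:                      # rule 4: short list
        n = prefix - 0xc0
        payload, rest = data[1:1 + n], data[1 + n:]
    else:                                   # rule 5: long list
        ln = prefix - 0xf7
        n = int.from_bytes(data[1:1 + ln], "big")
        payload, rest = data[1 + ln:1 + ln + n], data[1 + ln + n:]
    items = []
    while payload:                          # decode each nested item in turn
        item, payload = decode_item(payload)
        items.append(item)
    return items, rest

def rlp_decode(data: bytes):
    item, rest = decode_item(data)
    assert rest == b"", "trailing bytes after a single RLP item"
    return item

assert rlp_decode(b"\xd4\x88ethereum\x8afoundation") == [b"ethereum", b"foundation"]
```

The final assertion round-trips the ["ethereum", "foundation"] example from the encoding section: 0xd4 marks a 20-byte list payload, 0x88 an 8-byte string, and 0x8a a 10-byte string.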
Source: Ethereum Docs
The pseudocode from the Ethereum docs:
```python
def rlp_decode(input):
    if len(input) == 0:
        return
    output = ''
    (offset, dataLen, type) = decode_length(input)
    if type is str:
        output = instantiate_str(substr(input, offset, dataLen))
    elif type is list:
        output = instantiate_list(substr(input, offset, dataLen))
    output += rlp_decode(substr(input, offset + dataLen))
    return output

def decode_length(input):
    length = len(input)
    if length == 0:
        raise Exception("input is null")
    prefix = ord(input[0])
    if prefix <= 0x7f:
        return (0, 1, str)
    elif prefix <= 0xb7 and length > prefix - 0x80:
        strLen = prefix - 0x80
        return (1, strLen, str)
    elif prefix <= 0xbf and length > prefix - 0xb7 and length > prefix - 0xb7 + to_integer(substr(input, 1, prefix - 0xb7)):
        lenOfStrLen = prefix - 0xb7
        strLen = to_integer(substr(input, 1, lenOfStrLen))
        return (1 + lenOfStrLen, strLen, str)
    elif prefix <= 0xf7 and length > prefix - 0xc0:
        listLen = prefix - 0xc0
        return (1, listLen, list)
    elif prefix <= 0xff and length > prefix - 0xf7 and length > prefix - 0xf7 + to_integer(substr(input, 1, prefix - 0xf7)):
        lenOfListLen = prefix - 0xf7
        listLen = to_integer(substr(input, 1, lenOfListLen))
        return (1 + lenOfListLen, listLen, list)
    else:
        raise Exception("input does not conform to RLP encoding form")

def to_integer(b):
    length = len(b)
    if length == 0:
        raise Exception("input is null")
    elif length == 1:
        return ord(b[0])
    else:
        return ord(substr(b, -1)) + to_integer(substr(b, 0, -1)) * 256
```
These formulas are hard to fully grasp on their own, so we need to step through the code to see what the output looks like when RLP encodes or decodes arbitrarily structured binary data.
Go-ethereum RLP
If you are not familiar with Go, you can read another version written in TypeScript; in my opinion, it is easier to understand than the original Go version.
The RLP package structure:
```
├── decode.go
├── decode_tail_test.go
├── decode_test.go
├── doc.go
├── encbuffer.go
├── encbuffer_example_test.go
├── encode.go
├── encode_test.go
├── encoder_example_test.go
├── internal
│   └── rlpstruct
│       └── rlpstruct.go
├── iterator.go
├── iterator_test.go
├── raw.go
├── raw_test.go
├── rlpgen
│   ├── gen.go
│   ├── gen_test.go
│   ├── main.go
│   ├── testdata
│   │   ├── bigint.in.txt
│   │   ├── bigint.out.txt
│   │   ├── nil.in.txt
│   │   ├── nil.out.txt
│   │   ├── optional.in.txt
│   │   ├── optional.out.txt
│   │   ├── rawvalue.in.txt
│   │   ├── rawvalue.out.txt
│   │   ├── uint256.in.txt
│   │   ├── uint256.out.txt
│   │   ├── uints.in.txt
│   │   └── uints.out.txt
│   └── types.go
├── safe.go
├── typecache.go
└── unsafe.go
```
encode.go
First, let's look at the `Encoder` interface:
```go
type Encoder interface {
	EncodeRLP(io.Writer) error
}
```
This interface has a single method, which takes an `io.Writer` as input and writes the encoded output directly to it.
Because everything stored on the Ethereum blockchain is encoded with RLP, there are many implementations of this interface across the codebase.
Next, we have the function `func Encode(w io.Writer, val interface{}) error`, which takes `w` and `val` and encodes the value into binary data.
```go
func Encode(w io.Writer, val interface{}) error {
	// Optimization: reuse *encBuffer when called by EncodeRLP.
	if buf := encBufferFromWriter(w); buf != nil {
		return buf.encode(val)
	}

	buf := getEncBuffer()
	defer encBufferPool.Put(buf)
	if err := buf.encode(val); err != nil {
		return err
	}
	return buf.writeTo(w)
}
```
This is the main function that the `EncodeRLP` implementations call to encode their input. Go-ethereum uses `encBufferFromWriter` to reduce allocations; let's dig into it.
```go
func encBufferFromWriter(w io.Writer) *encBuffer {
	switch w := w.(type) {
	case EncoderBuffer:
		return w.buf
	case *EncoderBuffer:
		return w.buf
	case *encBuffer:
		return w
	default:
		return nil
	}
}
```
The main reason is that the authors use the `encBuffer` struct to encode the input data, so if `w` is already backed by an `encBuffer`, it is reused to encode the input.
But this raises a question: why did they need to create a new `encBuffer` struct for encoding at all? In some previous versions of go-ethereum, this struct did not exist. Let's see. The `encBuffer` struct is:
```go
// EncoderBuffer is a buffer for incremental encoding.
//
// The zero value is NOT ready for use. To get a usable buffer,
// create it using NewEncoderBuffer or call Reset.
type EncoderBuffer struct {
	buf       *encBuffer
	dst       io.Writer
	ownBuffer bool
}

type listhead struct {
	offset int // index of this header in string data
	size   int // total size of encoded data (including list headers)
}

type encBuffer struct {
	str     []byte     // string data, contains everything except list headers
	lheads  []listhead // all list headers
	lhsize  int        // sum of sizes of all encoded list headers
	sizebuf [9]byte    // auxiliary buffer for uint encoding
}
```
In `encbuffer.go`, there is a sync pool that stores `encBuffer` values:
```go
// The global encBuffer pool.
var encBufferPool = sync.Pool{
	New: func() interface{} { return new(encBuffer) },
}
```
In conclusion, a few points:
- go-ethereum uses `encBuffer` to store the encoded binary output
- A global `sync.Pool` stores `encBuffer` instances so they can be reused by subsequent encodings
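To make the pooling pattern concrete, here is a minimal, self-contained sketch of the get/reset/put cycle. The `myBuffer` type and `encodeWithPool` function are illustrative names for this sketch, not go-ethereum's:

```go
package main

import (
	"fmt"
	"sync"
)

// myBuffer stands in for encBuffer: it accumulates bytes
// and can be reset so its allocation is reused.
type myBuffer struct {
	str []byte
}

func (b *myBuffer) reset() { b.str = b.str[:0] }

var bufferPool = sync.Pool{
	New: func() interface{} { return new(myBuffer) },
}

func encodeWithPool(data []byte) []byte {
	buf := bufferPool.Get().(*myBuffer)
	buf.reset() // a pooled buffer may hold leftovers from a previous use
	defer bufferPool.Put(buf)

	buf.str = append(buf.str, data...)
	// Copy the result out: the buffer goes back to the pool and may be reused.
	out := make([]byte, len(buf.str))
	copy(out, buf.str)
	return out
}

func main() {
	fmt.Printf("%q\n", encodeWithPool([]byte("dog"))) // "dog"
}
```

The payoff is that repeated encodings reuse the same backing array instead of allocating a fresh buffer each time.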
Next, we have the function `buf.encode(val)`, which encodes `val` based on its Go type:
```go
func (buf *encBuffer) encode(val interface{}) error {
	rval := reflect.ValueOf(val)
	writer, err := cachedWriter(rval.Type())
	if err != nil {
		return err
	}
	return writer(rval, buf)
}
```
As you can see, go-ethereum uses reflection and encodes RLP based on the Go type of the value. The `writer` is a function that encodes a value of that type and writes the result into the `encBuffer`.
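The reflection-based dispatch can be sketched in a few lines. This is a simplified stand-in, not the real writer table; `pickWriter` and `writerFn` are hypothetical names for illustration:

```go
package main

import (
	"fmt"
	"reflect"
)

// writerFn mirrors the idea of go-ethereum's writer: a function that
// knows how to turn one Go kind into bytes.
type writerFn func(v reflect.Value) []byte

// pickWriter dispatches on the reflected kind, like makeWriter does.
func pickWriter(typ reflect.Type) (writerFn, error) {
	switch typ.Kind() {
	case reflect.String:
		return func(v reflect.Value) []byte { return []byte(v.String()) }, nil
	case reflect.Uint64:
		return func(v reflect.Value) []byte {
			return []byte(fmt.Sprintf("%d", v.Uint()))
		}, nil
	default:
		return nil, fmt.Errorf("type %v is not serializable in this sketch", typ)
	}
}

func encode(val interface{}) ([]byte, error) {
	rval := reflect.ValueOf(val)
	w, err := pickWriter(rval.Type())
	if err != nil {
		return nil, err
	}
	return w(rval), nil
}

func main() {
	out, _ := encode("cat")
	fmt.Printf("%q\n", out) // "cat"
}
```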
Another function, `cachedWriter`, returns the `writer` function for a given Go type:
```go
func cachedWriter(typ reflect.Type) (writer, error) {
	info := theTC.info(typ)
	return info.writer, info.writerErr
}
```
The `typeCache` struct caches the writer for each Go type. For more detail, see `typecache.go`:
```go
type typeCache struct {
	cur atomic.Value

	// This lock synchronizes writers.
	mu   sync.Mutex
	next map[typekey]*typeinfo
}
```
The fields are:
- `cur`: stores the current writer map; its value is a `map[typekey]*typeinfo`
- `mu`: a `sync.Mutex` used to synchronize writers
- `next`: the map that collects new entries before being published as the next `cur`
go-ethereum uses this map to store the writer function for each Go type. The map is kept in memory while `geth` is running.
Walking further into `theTC.info(typ)`, we find the main function that determines which `writer` function to use based on `typ`:
```go
// encode.go
func makeWriter(typ reflect.Type, ts rlpstruct.Tags) (writer, error) {
	kind := typ.Kind()
	switch {
	case typ == rawValueType:
		return writeRawValue, nil
	case typ.AssignableTo(reflect.PtrTo(bigInt)):
		return writeBigIntPtr, nil
	case typ.AssignableTo(bigInt):
		return writeBigIntNoPtr, nil
	case typ == reflect.PtrTo(u256Int):
		return writeU256IntPtr, nil
	case typ == u256Int:
		return writeU256IntNoPtr, nil
	case kind == reflect.Ptr:
		return makePtrWriter(typ, ts)
	case reflect.PtrTo(typ).Implements(encoderInterface):
		return makeEncoderWriter(typ), nil
	case isUint(kind):
		return writeUint, nil
	case kind == reflect.Bool:
		return writeBool, nil
	case kind == reflect.String:
		return writeString, nil
	case kind == reflect.Slice && isByte(typ.Elem()):
		return writeBytes, nil
	case kind == reflect.Array && isByte(typ.Elem()):
		return makeByteArrayWriter(typ), nil
	case kind == reflect.Slice || kind == reflect.Array:
		return makeSliceWriter(typ, ts)
	case kind == reflect.Struct:
		return makeStructWriter(typ)
	case kind == reflect.Interface:
		return writeInterface, nil
	default:
		return nil, fmt.Errorf("rlp: type %v is not RLP-serializable", typ)
	}
}
```
In conclusion, a few points:
- go-ethereum uses an in-memory map to cache the `writer` encode function for each Go type
- the `encBuffer` struct stores the binary data encoded by `writer`
- after the input is encoded, the output is written back into `w`:
```go
// writeTo writes the encoder output to w.
func (buf *encBuffer) writeTo(w io.Writer) (err error) {
	strpos := 0
	for _, head := range buf.lheads {
		// write string data before header
		if head.offset-strpos > 0 {
			n, err := w.Write(buf.str[strpos:head.offset])
			strpos += n
			if err != nil {
				return err
			}
		}
		// write the header
		enc := head.encode(buf.sizebuf[:])
		if _, err = w.Write(enc); err != nil {
			return err
		}
	}
	if strpos < len(buf.str) {
		// write string data after the last list header
		_, err = w.Write(buf.str[strpos:])
	}
	return err
}
```
The Go RLP package is harder to understand than the TypeScript version, but it applies more engineering to reduce allocations and cut encoding time, for example by using the in-memory map that caches the `writer` encode function per Go type.
decode.go
Decoding takes the same approach as encoding, with a `Decoder` interface, but the input of decoding is a `Stream`, whereas the encoder writes to an `io.Writer`.
```go
// ByteReader must be implemented by any input reader for a Stream. It
// is implemented by e.g. bufio.Reader and bytes.Reader.
type ByteReader interface {
	io.Reader
	io.ByteReader
}

// Stream can be used for piecemeal decoding of an input stream. This
// is useful if the input is very large or if the decoding rules for a
// type depend on the input structure. Stream does not keep an
// internal buffer. After decoding a value, the input reader will be
// positioned just before the type information for the next value.
//
// When decoding a list and the input position reaches the declared
// length of the list, all operations will return error EOL.
// The end of the list must be acknowledged using ListEnd to continue
// reading the enclosing list.
//
// Stream is not safe for concurrent use.
type Stream struct {
	r ByteReader

	remaining uint64   // number of bytes remaining to be read from r
	size      uint64   // size of value ahead
	kinderr   error    // error from last readKind
	stack     []uint64 // list sizes
	uintbuf   [32]byte // auxiliary buffer for integer decoding
	kind      Kind     // kind of value ahead
	byteval   byte     // value of single byte in type tag
	limited   bool     // true if input limit is in effect
}
```
Because decoding works the same way as encoding, I won't go as deep right now, but a few points:
- `decode` uses the same in-memory type cache
- the `decoder` function is chosen based on the first byte and the declared length of the input
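How the first byte classifies the value ahead can be sketched as follows. This is a simplified stand-in for what `Stream`'s kind detection does; the byte ranges come from the standard RLP prefix rules:

```go
package main

import "fmt"

// kindOf classifies an RLP item by its first byte (simplified sketch).
func kindOf(first byte) (kind string, payloadInfo string) {
	switch {
	case first < 0x80:
		return "Byte", "the byte itself is the value"
	case first < 0xb8:
		// 0x80..0xb7: string of 0..55 bytes
		return "String", fmt.Sprintf("short string, %d bytes follow", first-0x80)
	case first < 0xc0:
		// 0xb8..0xbf: long string, 1..8 length bytes follow
		return "String", fmt.Sprintf("long string, %d length bytes follow", first-0xb7)
	case first < 0xf8:
		// 0xc0..0xf7: list with a payload of 0..55 bytes
		return "List", fmt.Sprintf("short list, %d payload bytes", first-0xc0)
	default:
		// 0xf8..0xff: long list, 1..8 length bytes follow
		return "List", fmt.Sprintf("long list, %d length bytes follow", first-0xf7)
	}
}

func main() {
	k, info := kindOf(0x83) // prefix of RLP("dog")
	fmt.Println(k, "-", info) // String - short string, 3 bytes follow
}
```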
```go
func makeDecoder(typ reflect.Type, tags rlpstruct.Tags) (dec decoder, err error) {
	kind := typ.Kind()
	switch {
	case typ == rawValueType:
		return decodeRawValue, nil
	case typ.AssignableTo(reflect.PtrTo(bigInt)):
		return decodeBigInt, nil
	case typ.AssignableTo(bigInt):
		return decodeBigIntNoPtr, nil
	case typ == reflect.PtrTo(u256Int):
		return decodeU256, nil
	case typ == u256Int:
		return decodeU256NoPtr, nil
	case kind == reflect.Ptr:
		return makePtrDecoder(typ, tags)
	case reflect.PtrTo(typ).Implements(decoderInterface):
		return decodeDecoder, nil
	case isUint(kind):
		return decodeUint, nil
	case kind == reflect.Bool:
		return decodeBool, nil
	case kind == reflect.String:
		return decodeString, nil
	case kind == reflect.Slice || kind == reflect.Array:
		return makeListDecoder(typ, tags)
	case kind == reflect.Struct:
		return makeStructDecoder(typ)
	case kind == reflect.Interface:
		return decodeInterface, nil
	default:
		return nil, fmt.Errorf("rlp: type %v is not RLP-serializable", typ)
	}
}
```
Summary
RLP is the algorithm Ethereum uses to encode and decode arbitrarily structured binary data. It is specified in the Yellow Paper, but the formal description is difficult to understand fully on a first read.
Reference
- Go-ethereum written in Go
- Go-ethereum analysis
- Yellow Paper
- Documentation
- Medium
- Go-ethereum monorepo written in JS
Note
This is one of the first public articles I have written in English, so it may contain mistakes in language or in the technical content. Feel free to point them out if you spot any. Love you all.