
Weeknotes: Data availability with XOR-based schemes

We review XOR-based schemes for Data Availability, plus a quick demo of how to set up Basin behind an HTTP server

Begin transmission…

Data Availability with Validity Proofs

by Avichal Pandey

Intro

Basin is still in its research phase, and we’re exploring various data availability approaches, including XOR-based (and related) schemes for erasure encoding. In this architecture, validators create an auxiliary data structure with a dual purpose: it can be used to repair data after data loss or a data withholding attack, and it ensures that validators are not deleting old data they are supposed to store.

One common consideration in DA systems is making the “data storage” requirement optional. This enables light nodes to validate that no data was lost without downloading all the data. That guarantee is typically provided by data availability sampling (DAS) and fraud proofs. However, fraud proofs introduce a “challenge” window, which delays the system’s finality. Fraud proofs are also tricky to implement and test.

Validity Proofs for DA

An alternative to DAS and fraud proofs is to validate erasure encoding itself using SNARKs. This idea first surfaced in the Celestia forum last year.

If you could generate a zk proof of correct erasure coding, then, first, light nodes would not have to wait out a fraud-proof window after sampling each block before considering it final. Second, we wouldn’t need bad-encoding fraud proofs (BEFPs), which require the light node to download an entire row or column of the block data. Third, it unlocks the possibility of having consensus nodes vote on blocks without downloading all the data.

In this post, I want to build on this idea and see how feasible it is to “verifiably” compute XOR-based codes using modern SNARK frameworks. For example, I want to answer questions like:

  • How much time does it take to build a proof of encoding a 1 GB blob?

  • How big is the proof?

  • How well suited is a particular SNARK framework to this task?

Background

XOR-based codes are essentially a sequence of XOR operations. In my first experiment, I consider a 1 GB object broken into 1 KB chunks. Bitwise operations such as XOR were long the Achilles’ heel of SNARKs; recently, however, the problem has been addressed by two different kinds of solutions, described below.
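To make that concrete, here is a minimal Rust sketch (not from any particular scheme) of the core primitive: folding fixed-size chunks together with XOR to produce a parity chunk. Real schemes compute many such parities over different chunk subsets; this only illustrates the operation being proven.

```rust
/// Fold a blob's 1 KB chunks into a single parity chunk with XOR.
/// Real XOR-based codes derive many parities over different subsets;
/// this shows only the core operation.
const CHUNK_SIZE: usize = 1024; // 1 KB chunks, as in the experiment below

fn xor_parity(blob: &[u8]) -> Vec<u8> {
    let mut parity = vec![0u8; CHUNK_SIZE];
    for chunk in blob.chunks(CHUNK_SIZE) {
        for (p, b) in parity.iter_mut().zip(chunk) {
            *p ^= b; // the entire encoding reduces to sequences of XORs
        }
    }
    parity
}

fn main() {
    let blob = vec![0xABu8; 4 * CHUNK_SIZE]; // stand-in for a large object
    let parity = xor_parity(&blob);
    // XOR of an even number of identical chunks cancels to zero.
    assert!(parity.iter().all(|&b| b == 0));
}
```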

The crux of why bitwise operations are complicated in SNARKs is this: the arithmetic circuits used in SNARKs are defined over finite fields. In the case of elliptic curve-based SNARKs, these fields can be as large as 256 bits to keep the protocol secure. Arithmetic operations such as addition and multiplication can be performed natively in the field, while a bitwise operation requires simulating the operands and the operation bit by bit, which creates overhead. The following image shows how converting an arithmetic circuit to a boolean circuit makes it bigger and wastes space by storing a single bit in a data type meant to represent a whole field element.

Source: Justin Thaler’s lecture

Lookup tables addressed this issue by precomputing all the results of bitwise operations and indexing them in a table that the prover can access. Another way to address this problem is to work with proving systems based on binary fields instead of prime fields (almost all SNARKs use prime fields). Binius is a pioneer in this area and is widely considered a paradigm shift in SNARKs.
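This is why binary fields fit our workload so well: in GF(2^k), field addition is exactly bitwise XOR of the operands' bit representations, so the encoder's only operation is native to the field. A quick illustration of the identity (this is not Binius code):

```rust
fn main() {
    // In a binary field GF(2^k), addition is carry-less: each bit is
    // added mod 2, which is exactly bitwise XOR. An XOR over a 128-bit
    // word is therefore a single native field addition.
    let a: u128 = 0x0123_4567_89AB_CDEF_0123_4567_89AB_CDEF;
    let b: u128 = 0xFEDC_BA98_7654_3210_FEDC_BA98_7654_3210;
    let sum = a ^ b; // field addition in GF(2^128)
    assert_eq!(sum ^ b, a); // every element is its own additive inverse
}
```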

The Binius project is at a very early stage in terms of implementation. As references, we have a few papers and talks; in terms of concrete code, there is only a non-production-ready POC.

The Binius approach is well suited to our case for the following reasons:

  • XOR-based erasure codes can be performed in binary fields without much overhead.

  • Unlike elliptic curve-based SNARKs (Groth16, Plonk, etc.) and commitment schemes like KZG, it doesn’t require a trusted setup.

  • Unlike in other hash-based SNARKs (STARKs), bitwise operations incur no overhead.

  • It has probably the fastest known proving times.

  • It supports proof composition.

The Experiment

To test Binius, I modified one of their examples to perform random XOR operations instead of AND operations. For this experiment, we assume that performing some number N of XOR operations is a reasonable stand-in for the actual encoding computation; the sketch below shows the shape of that workload.
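To be clear about what the prover sees, the trace is essentially N triples (a, b, a XOR b) over 128-bit words, and the circuit constrains the third column to equal the XOR of the first two. A rough sketch of that workload (this is not the Binius API; it assumes the rand crate):

```rust
use rand::Rng; // assumes the `rand` crate

/// One row of the trace the circuit constrains: `out` must equal `a ^ b`.
struct XorRow {
    a: u128,
    b: u128,
    out: u128,
}

/// Build a trace of `n` random XOR operations, standing in for the
/// real encoding computation in this first experiment.
fn random_xor_trace(n: usize) -> Vec<XorRow> {
    let mut rng = rand::thread_rng();
    (0..n)
        .map(|_| {
            let (a, b): (u128, u128) = (rng.gen(), rng.gen());
            XorRow { a, b, out: a ^ b }
        })
        .collect()
}

fn main() {
    // For a 1 GB object in 16-byte words, n would be 2^26 (see the table).
    let trace = random_xor_trace(1 << 10); // small n for illustration
    assert!(trace.iter().all(|row| row.out == row.a ^ row.b));
}
```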

The following are observations from my local machine (a Mac M1 with ten cores and 32 GB memory).

| Quantity | Value |
| --- | --- |
| Data | 1 GB = 1024 × 1024 × 1024 bytes |
| Chunk size | 1 KB = 1024 bytes |
| Chunks | 1024 × 1024 |
| Field elements in 1 chunk | 1 KB / size(u128) = 64 |
| Leaves (field elements) | 1024 × 1024 × 64 |
| Total operations (1 GB) | 1024 × 1024 × 64 |
| Compute trace (field elements) | 1024 × 1024 × 64 |
| Compute trace size (bits) | 1024 × 1024 × 64 × 7 |
| Proof time | ~60 ms |
| Proof size | ~320 bytes |
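The sizing follows directly from the parameters; a quick sanity check of the arithmetic:

```rust
fn main() {
    const GB: u64 = 1024 * 1024 * 1024; // object size in bytes
    const CHUNK: u64 = 1024;            // chunk size in bytes
    const WORD: u64 = 16;               // size of a u128 in bytes

    let chunks = GB / CHUNK;                  // 2^20 chunks
    let words_per_chunk = CHUNK / WORD;       // 64 field elements per chunk
    let total_ops = chunks * words_per_chunk; // 2^26 total XOR operations

    assert_eq!(chunks, 1 << 20);
    assert_eq!(words_per_chunk, 64);
    assert_eq!(total_ops, 1 << 26);
}
```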

This experiment is just a first attempt at the problem, and it only approximates the actual XOR-based encoding computation. The circuit might need more constraints to be secure, so let us assume a generous error margin: say generating a proof for the actual encoding takes 10x longer. It would still finish in ~600 ms!

These results are far better than I expected. I thought generating a practical proof might take 10-15 minutes, and indeed my first naive implementation took 10 minutes for a 125 MB object (a quarter of the size used in the table). With proper batching of the computation trace, prover latency dropped dramatically. Moreover, this is just a first pass over the Binius repo; there may be more levers to pull to optimize proving time.

Concerns with Binius

  • It is extremely new. The Binius reference implementation is a POC not ready for production use. There is no documentation or dev chat for asking questions; the only way to learn how to use it is by reading the existing examples.

  • It offers a low-level API for generating SNARK proofs. We must write the computation trace and specify the SNARK circuit by hand, constraint by constraint, and manipulate polynomials directly. Writing SNARKs at this level of abstraction is slow and error-prone; it is analogous to writing programs in assembly language.

Conclusion

The first issue should resolve itself as the project matures. zkVMs address the second.

Today, most RISC-V-based zkVMs (RISC Zero, SP1, Jolt, Nexus, etc.) can take a high-level Rust program as input and produce a SNARK proof, hiding all the complexity of generating the computation trace, circuit constraints, and polynomial manipulation.

Unfortunately, as of today, most zkVMs are based on STARKs, not Binius. However, a16z’s Jolt has Binius support on its roadmap, and work on it has already started.

This leads me to conclude that using Jolt with Binius could enable validity proofs. Working with a zkVM also allows us to swap out the VM later: for instance, we could start with RISC Zero, since it has great documentation and examples, and swap it out for Jolt once Jolt supports Binius. The sketch below gives a flavor of what the guest program might look like.
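As a rough illustration (a hypothetical sketch against the RISC Zero guest API, with the XOR fold standing in for a real encoder): the guest reads the chunks, performs the encoding, and commits the result to the public journal, so a verifier can check the proof without downloading the data.

```rust
// Hypothetical RISC Zero guest program: prove that the committed parity
// is the XOR fold of the host-supplied chunks. The XOR fold stands in
// for a real encoder here.
#![no_main]
risc0_zkvm::guest::entry!(main);

use risc0_zkvm::guest::env;

fn main() {
    // Private input from the host: the data chunks to encode.
    let chunks: Vec<Vec<u8>> = env::read();
    let mut parity = vec![0u8; chunks[0].len()];
    for chunk in &chunks {
        for (p, b) in parity.iter_mut().zip(chunk) {
            *p ^= b;
        }
    }
    // Public output: the parity is bound to the proof in the journal.
    env::commit(&parity);
}
```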

Basin demo: Hosted HTTP server

by Dan Buchholz

We created a demo for how to use Basin with a "hosted" server setup:

  • A backend wallet sends object storage transactions on behalf of a requesting user.

  • You can /set a file—up to 100 MB in the demo (but the protocol itself has higher limits).

  • You can also /list all objects in the store.

Clone the repo and review the README instructions to get started: textileio/demo-basin-server

From the client's perspective, curl requests are sent either to push files:

curl -X POST -H 'Content-Type: multipart/form-data' \
--form 'address=0x79447b8db3a9d23f7db75ae724ba450b7b8dd7b0' \
--form 'key=hello/test' \
--form '[email protected]' \
http://localhost:8081/set

// this will log the onchain transaction for storing the object
{
  "data": "bafy2bzacedxeu3g3uazqpn2ln7yvyfhc6ilj3vi5bf3h6usvygsxaub7paws4",
  "gas_used": 4311212,
  "hash": "1DDBED9D0398C4A7C0B2E0DE99BCE77C34232CC1AD45E9304F990A416ACAF830",
  "height": "956895",
  "status": "committed"
}

And also list the data in the store:

curl -X POST -H 'Content-Type: application/json' \
-d '{"prefix": "hello/", "limit": 10}' \
http://localhost:8081/list

// this logs data under the common prefix
{
  "common_prefixes": [],
  "objects": [
    {
      "key": "hello/world",
      "value": {
        "cid": "bafybeid3weurg3gvyoi7nisadzolomlvoxoppe2sesktnpvdve3256n5tq",
        "metadata": {},
        "resolved": true,
        "size": 5
      }
    }
  ]
}

The backend is written in Rust and makes use of the Basin SDK. Running the server logs information about incoming requests:

2024-07-20T17:49:27.589-04:00 - INFO Starting server at 127.0.0.1:8081
2024-07-20T17:50:12.015-04:00 - INFO {"body":"{\"multipart/form-data; boundary=------------------------u3Cayud8pzT4bsvlrHH4Z5\"}","route":"set"}
2024-07-20T17:50:14.691-04:00 - INFO {"client_addr":"127.0.0.1:50064","duration_ms":2676,"method":"POST","path":"/set","status":200}
2024-07-20T17:50:33.952-04:00 - INFO {"body":"{prefix: Some(\"hello/\"), delimiter: None, offset: None, limit: Some(10)}","route":"list"}
2024-07-20T17:50:34.371-04:00 - INFO {"client_addr":"127.0.0.1:50068","duration_ms":419,"method":"POST","path":"/list","status":200}

You could imagine adding other endpoints, like fetching the actual file behind an object or deleting data, but hopefully, the demo server is enough to get you started! A rough sketch of what a /get route might look like follows.
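For example, a hypothetical /get handler sketched with axum (the fetch_from_basin stub is a placeholder, not the actual Basin SDK API; see the repo for the real client code):

```rust
// Hypothetical /get endpoint sketch using axum. `fetch_from_basin` is a
// stub standing in for whatever the Basin SDK exposes, not its real API.
use axum::{extract::Query, http::StatusCode, response::IntoResponse};
use serde::Deserialize;

#[derive(Deserialize)]
struct GetParams {
    key: String,
}

// GET /get?key=hello/world -> returns the object's bytes.
async fn get_object(Query(params): Query<GetParams>) -> impl IntoResponse {
    match fetch_from_basin(&params.key).await {
        Some(bytes) => (StatusCode::OK, bytes).into_response(),
        None => (StatusCode::NOT_FOUND, "no such object").into_response(),
    }
}

// Placeholder for the Basin SDK call the real server would make.
async fn fetch_from_basin(_key: &str) -> Option<Vec<u8>> {
    None
}

#[tokio::main]
async fn main() {
    let app = axum::Router::new().route("/get", axum::routing::get(get_object));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8081").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```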


Read & watch

  • Stark by hand: An in-depth tutorial on STARK internals by the RISC Zero team.

End transmission…

Want to dive deeper, ask questions, or just nerd out with us? Jump into our Telegram or Discord—including weekly research office hours or developer office hours. And if you’d like to discuss any of these topics in more detail, comment on the issue over in GitHub!

Are you enjoying Weeknotes? We’d love your feedback—if you fill out a quick survey, we’ll be sure to reach out directly with community initiatives in the future! Fill out the form here
