Weeknotes: Generating distributed random variables using cryptographic hashes, solving Ethers v6 + Hardhat nonce issues, & thinking about responsive design

We review geometric distributions for producing distributed randomness with hashes, how to use ethers v6 with Hardhat (plus fixing "nonce too low" issues), and responsive design approaches for frontends.

Begin transmission…

Flipping bits (and coins) with hashes

by Carson Farmer

Simulating Geometric Distributions with 32-byte Hashes

In the realm of probabilistic data structures like skip lists, treaps, merkle search trees, and others, efficiently generating random numbers that follow specific distributions is crucial. One particularly interesting problem is simulating a geometric distribution—a model describing the number of trials needed to achieve the first success in a sequence of independent and identically distributed Bernoulli trials. At the same time, in distributed (and/or peer-to-peer) systems, we also end up doing a lot of hashing, and a good hash function should be pretty much indistinguishable from a random string of bits. So in this post, we'll combine these two ideas and explore a method for generating geometrically distributed random variables from cryptographic hashes, by treating the hashes as a source of independent random coin flips! The outcome will be probabilistic data structures with pseudorandom values that are (ideally) identical across peers in the peer-to-peer system—deterministic randomness!

The Basic Idea

A geometric distribution typically requires a success probability p to determine the distribution of trials until the first success. If you want to simulate a geometric distribution with a probability that isn’t 1/2, say something like p = 1 - (1/m), you can manipulate a sequence of fair Bernoulli trials (each with p = 1/2) to achieve this. The solution involves transforming a 32-byte hash, which is essentially a 256-bit binary string, into the desired geometric distribution. But how?

Understanding the Transformation

Each bit in the hash can be thought of as the result of a fair coin flip—either 0 or 1, each with probability 1/2. However, to simulate our target distribution, we need groups of bits where the group is all zeros with probability (approximately) 1/m. To find the number of bits (k) per group, we calculate the smallest k for which 2^k is greater than or equal to m. Mathematically, this is determined by:

k = ⌈log₂(m)⌉

For example, m = 4 gives k = 2, and a 2-bit group is 00 with probability 1/4 = 1/m (the approximation 1/2^k ≈ 1/m is exact whenever m is a power of two).

Implementing the Simulation

Once k is determined, the hash is divided into groups of k bits. Each group is treated as a single trial with the desired success probability:

  1. Extract Groups: We iterate over the 256 bits in batches of k bits.

  2. Determine Success or Failure: A group is considered a "success" if at least one of its k bits is 1; a group of all zeros is a "failure."

    1. An all-zero group happens with a probability of:

       (1/2)^k = 1/2^k ≈ 1/m

       so each group is a "success" with probability p = 1 - 1/m.

  3. Count Trials: The number of groups inspected up to and including the first "success" is the result of our geometric distribution simulation (worked out below).
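Putting these pieces together, the count T follows the geometric distribution we were after:

P(T = t) = (1/m)^(t - 1) · (1 - 1/m),    E[T] = 1/p = m / (m - 1)

For m = 4, the expected value is 4/3 ≈ 1.333, which is what the test in the code below checks against.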

Here's a practical function to achieve this using Rust:

#[cfg(test)]
mod tests {
    use rand::Rng;

    fn simulate_geometric_random(bytes: [u8; 32], m: u32) -> u32 {
        // Number of fair bits per group: k = ⌈log2(m)⌉, i.e., the smallest k
        // with 2^k >= m (assumes m >= 2)
        let k = m.next_power_of_two().ilog2();
        // Number of complete k-bit groups in the 256-bit hash
        let batch_count = 256 / k;
        for i in 0..batch_count {
            // Assemble the i-th group of k bits (groups may straddle byte boundaries)
            let mut batch = 0u32;
            for j in 0..k {
                let bit = k * i + j;
                let byte_index = (bit / 8) as usize;
                let bit_index = bit % 8;
                batch = (batch << 1) | u32::from((bytes[byte_index] >> bit_index) & 1);
            }
            // batch == 0 (all k bits zero) is a "failure" with probability 1/2^k ≈ 1/m;
            // batch != 0 is a "success" with probability p = 1 - 1/m
            if batch != 0 {
                // +1 because the geometric distribution starts at 1
                return i + 1;
            }
        }
        batch_count + 1 // no success found within the 256 available bits
    }

    #[test]
    fn test_calculate_rank() {
        let mut rng = rand::thread_rng();
        let m = 4;
        let p = 1.0 - 1.0 / m as f64;
        let mut sum = 0;
        for _ in 0..1000 {
            let index = simulate_geometric_random(rng.gen(), m);
            sum += index;
        }
        println!("mean: {} expected: {}", sum as f64 / 1000.0, 1.0 / p);
    }
}

// running 1 test
// mean: 1.333 expected: 1.3333333333333333
// test rank::tests::test_calculate_rank ... ok
// test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.01s
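As a quick usage sketch, here's how this yields a deterministic "rank" for a key, e.g., when inserting into a merkle search tree. This assumes simulate_geometric_random is lifted out of the test module, and it uses the sha2 crate; the rank_for_key helper is hypothetical, just for illustration. Every peer that hashes the same key derives the same rank:

use sha2::{Digest, Sha256};

// Hypothetical helper: hash a key and read the digest as a stream of
// fair coin flips to derive a deterministic, geometrically distributed rank.
fn rank_for_key(key: &[u8], m: u32) -> u32 {
    let digest: [u8; 32] = Sha256::digest(key).into();
    simulate_geometric_random(digest, m)
}

fn main() {
    // Same key -> same hash -> same rank, on every peer
    println!("rank: {}", rank_for_key(b"some-key", 4));
    println!("rank: {}", rank_for_key(b"another-key", 4));
}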

The Takeaway

This method is efficient and interesting, converting a simple hash value into a sequence of geometrically distributed random trials. It underscores how foundational concepts in probability can be applied to solve practical problems in computer science, particularly in the design of data structures where randomness and probability management are important. As a bonus, treating our hashes (something we have to compute anyway) as a random oracle makes this approach computationally elegant and requires minimal additional resources, which is ideal for environments where performance and memory efficiency are critical.

Troubleshooting Ethers v6 and Hardhat

by Dan Buchholz

We're close to launching a new Tableland SDK that includes Polygon Amoy support and also upgrades from ethers v5 to ethers v6. Most of the ethers changes shouldn't really break any of the SDK's behavior, but it will mean a major dependency bump, since the SDK will no longer work with ethers v5.

The upgrade process led to a couple of challenges (and solutions!) with using ethers v6, particularly when developing against a local Hardhat node:

  1. A perpetual "failed to detect network" error gets logged every second, even though the network exists (see here for more details). For the most part, this is more annoying than actually preventing the SDK from working.

  2. A "nonce too low" or "nonce expired" error, indicating that a transaction has been sent/accepted with the same nonce as one that's already been used. This was a huge issue because it made it impossible to run our tests against a Local Tableland network, which runs a Hardhat node: tests would flakily fail due to nonce issues.

For problem (1), it turned out to be an easy fix. The "failed to detect network" error would get logged every time the getSigner() method was called on a provider. Before making that call, all that's needed is a call to _detectNetwork():

// Instantiate a `provider`—e.g., via `globalThis.ethereum` or an instance of `Eip1193Provider`
import { BrowserProvider } from "ethers";

const browserProvider = new BrowserProvider(provider);
await browserProvider._detectNetwork(); // this stops the annoying logging
const signer = await browserProvider.getSigner();

Problem (2), on the other hand, took forever to figure out. I tried:

  • Always passing an ethers signer wrapped in an ethers NonceManager to the Tableland Database class.

  • Making sure all calls in a test are awaited before proceeding to the next test (to ensure that txs weren't being submitted too quickly to the Hardhat node).

  • In the SDK's Database, Registry, and exec(), taking the passed signer and force-converting it to a NonceManager.

  • Getting the nonce (via signer.getNonce()) before a call is made and manually passing it in tx calls.

  • Setting custom JsonRpcProvider options (note: this was the ~right solution).

  • In the before/after mocha test hooks, making calls to the Hardhat node to hardcode/reset the nonce with hardhat_setNonce (here)...but, for obvious reasons, this didn't work.

It was a much simpler fix, as first described here. An ethers JsonRpcProvider can take several options, and setting batchStallTime and cacheTimeout is what's required to avoid the nonce issues. Also, since we're always connecting to the same network in this case (a Hardhat node), setting staticNetwork (coupled with the network param) can reduce some calls, too.

import { JsonRpcProvider, Network, Wallet } from "ethers";

const account = "your_private_key";
const wallet = new Wallet(account);
const chainName = "local-tableland"; // i.e., a hardhat node
const chainId = 31337;
const network = new Network(chainName, chainId); // Optional
const signer = wallet.connect(
  new JsonRpcProvider(`http://127.0.0.1:8545`, network, {
    staticNetwork: true, // Reduce `eth_chainId` calls
    batchStallTime: 0, // Don't batch requests, send them immediately
    cacheTimeout: -1, // Don't cache
  })
);

Responsive enough

by Jim Kosem

We're in the midst of designing and prototyping a number of sites showing what we're working on at Textile, which I've been writing about (or around) for the past couple of weeks. At first, we talked about just putting up a website in Framer, which should be good enough for now. Then I realized that you don't quite get responsive behavior out of the box.

The long-standing issue with designing visuals, and really most things for screens, is that you only half know what they will look like for the person who stumbles upon them. Color issues notwithstanding, there is no way to know what sort of device or screen size someone will use. So the fact that roughly 65.49% of global website traffic comes from mobile devices tells us this is something we need to get underway. It just needs to work at this point, and fortunately responsive design is by now quite established. Framer itself has some tools to help with this, but the interesting thing from a design standpoint is deciding how to let the design degrade and what to cut. What is important to see or read when you're on a phone? How much stuff should we be putting on a page, then? The issues are innumerable, but working through them is part of staying loose in the right places while designing for what is a constantly shifting target.


Other updates this week

Data Economy hack winners!

Congrats to all of the teams that participated in the Filecoin Data Economy hackathon. Here are the top 3 projects:

  1. DBNS by @LionisNick—a unified namespace to host and collaborate datasets and AI models: here

  2. DeRAG by @debuggingfuture—a DataDAO-owned LLM AI with trustless, decentralized augmentation: here

  3. ChainQuery by @0x_Clint—web3 Q&A platform with token-based rewards and AI augmented answers: here

Decentralized Intelligence hack winners!

The DI hack had 7 total winners: 3 top projects and 4 honorable mentions:

  • Arbitrarian—simplifying smart contract learning and ERC20 token creation: here

  • Memester—merging meme creation with blockchain for rewards: here

  • Spooky—storytelling with AI and blockchain, minting narratives as NFTs: here

  • DeFi Pets—interactive DeFi learning through AI-powered NFT pets: here

  • dChat—secure, decentralized communication via blockchain: here

  • TokenMart—revolutionizing e-commerce with tokenized rewards: here

  • Crypto Subscriptions—managing crypto subscriptions with ease: here

End transmission…

Want to dive deeper, ask questions, or just nerd out with us? Jump into our Telegram or Discord—including weekly research office hours or developer office hours. And if you’d like to discuss any of these topics in more detail, comment on the issue over in GitHub!

Are you enjoying Weeknotes? We'd love your feedback—if you fill out a quick survey, we'll be sure to reach out directly with community initiatives in the future! Fill out the form here.
