
Weeknotes: Base Sepolia support, new query placeholders, Rust-isms, & AI learnings

New Tableland query placeholder features & Base Sepolia chain support, Rust Options with references plus wasm compilation, and AI spotlight/learnings

Begin transmission…

Tableland support for placeholders and query params on the /query endpoint

by Bruno Calza

A new version of Tableland was released with support for placeholders and query params on the /query endpoint (both GET and POST).

Here are some examples:

  • GET /query:

https://tableland.network/api/v1/query?statement=select * from _137_266 where ksuid = ? and year = ?&params='26G4V9VLoG9wVTeBE1M4UhJzOPi'&params=2011
# url-encoded version
https://tableland.network/api/v1/query?statement=select%20*%20from%20_137_266%20where%20ksuid%20=%20?%20and%20year%20=%20?&params=%2726G4V9VLoG9wVTeBE1M4UhJzOPi%27&params=2011
  • POST /query:

curl -s -X POST "https://tableland.network/api/v1/query" \
--data '{
  "statement": "select * from _137_266 where ksuid = ? and year = ?",
  "params": ["26G4V9VLoG9wVTeBE1M4UhJzOPi",2011]
}'

With that change, it becomes a lot easier to programmatically build your query requests and send them to the Tableland Gateway. You don't need to worry about query string building!
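For example, here's a minimal sketch of sending a parameterized POST query from Rust. It assumes the reqwest, tokio, and serde_json crates and is not an official Tableland client:

use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Leave the placeholders in the statement and pass the values
    // separately as params, exactly as in the curl example above.
    let body = json!({
        "statement": "select * from _137_266 where ksuid = ? and year = ?",
        "params": ["26G4V9VLoG9wVTeBE1M4UhJzOPi", 2011]
    });
    // POST the body to the gateway (requires reqwest's "json" feature).
    let resp = reqwest::Client::new()
        .post("https://tableland.network/api/v1/query")
        .json(&body)
        .send()
        .await?;
    println!("{}", resp.text().await?);
    Ok(())
}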

Tableland Base Sepolia chain support

by Dan Buchholz

Tableland is adding Base Sepolia chain support! We've already updated the validator and the @tableland/evm repo, so you can build on Base today with smart contracts. The downstream clients (JS SDK and Studio) will incorporate Base support later this week.

Note this is only for the Base testnet, so mainnet support isn't available yet. Mainnet will be added based on developer demand…so let us know if it's something you're looking for on Tableland!

Option<&T> vs. &Option<T> in Rust

by Avichal Pandey

How do you decide whether to use Option<&T> or &Option<T> as a function argument or return type in Rust? For instance, should you choose the first or second version of the public functions shown below?

pub struct Holder(Option<Data>); // an illustrative tuple struct; `Data` is any inner type

impl Holder {
    // Version 1: borrow the whole Option.
    pub fn data_a(&self) -> &Option<Data> {
        &self.0
    }

    // Version 2: an Option holding a borrow of the inner value.
    pub fn data_b(&self) -> Option<&Data> {
        self.0.as_ref()
    }
}

The choice of one over the other will impact the code base's overall performance, flexibility, and maintainability.
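As a quick illustration of the flexibility angle (the function names here are hypothetical), an Option<&T> can be produced from a plain reference, from an Option via as_ref(), or as a bare None, whereas an &Option<T> requires a real Option<T> to live somewhere so it can be borrowed:

fn print_len_a(data: &Option<String>) {
    // The caller must own (or borrow from) an actual Option<String>.
    if let Some(s) = data {
        println!("{}", s.len());
    }
}

fn print_len_b(data: Option<&String>) {
    if let Some(s) = data {
        println!("{}", s.len());
    }
}

fn main() {
    let owned: Option<String> = Some("hello".to_string());
    let plain = String::from("world");

    print_len_a(&owned);         // works
    // print_len_a(&plain);      // won't compile: &String is not &Option<String>
    print_len_b(owned.as_ref()); // works
    print_len_b(Some(&plain));   // also works; no Option<String> needs to exist
    print_len_b(None);           // and None is always available
}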

Here's a great explanation: https://youtu.be/6c7pZYP_iIE

Experimenting with Rust & wasm compilation

by Dan Buchholz

We're working on getting the Basin SDK to compile to wasm, which makes it easier to build downstream JavaScript or Python clients that stay in sync with the base Rust crate. The most common approach is wasm-bindgen plus wasm-pack. With wasm-bindgen, you add macros to the parts of your library that you want to expose across the wasm boundary:

use wasm_bindgen::prelude::*;

// The macro generates the JS/wasm glue for this struct.
#[wasm_bindgen]
pub struct Data {
    inner: u8,
}

And then you build it by specifying your crate and the target architecture:

wasm-pack build --target web my_crate

You might also need Cargo directives that pull in architecture-specific dependencies or features:

[target.'cfg(target_arch = "wasm32")'.dependencies]
my_dependency = "0.0.1"
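On the code side, the same cfg predicate can gate which implementation gets compiled for each target. Here's a minimal sketch (the function name is illustrative, not from the Basin SDK):

// Only compiled when targeting wasm32.
#[cfg(target_arch = "wasm32")]
pub fn platform_note() -> &'static str {
    "compiled for wasm32"
}

// Only compiled for native targets.
#[cfg(not(target_arch = "wasm32"))]
pub fn platform_note() -> &'static str {
    "compiled natively"
}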

The Rust cookbook has a ton of information on how to set this all up, too: here. For example, if a library is doing any I/O, you'll have to set up additional wasm support with js-sys or web-sys, which provide bindings to the functionality (like file access) that Node.js or a web browser exposes. If there are any async operations, the wasm-bindgen-futures crate is also required for converting between JS Promises and Rust Futures.
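For instance, here's a minimal sketch (illustrative, not from the Basin SDK) of an async function exposed to JavaScript; with wasm-bindgen-futures as a dependency, wasm-bindgen compiles this into a JS function that returns a Promise:

use wasm_bindgen::prelude::*;

// On the JS side this becomes: await add_one(41)
#[wasm_bindgen]
pub async fn add_one(input: u8) -> Result<JsValue, JsValue> {
    // An Err here surfaces as a rejected Promise in JS.
    let result = input
        .checked_add(1)
        .ok_or_else(|| JsValue::from_str("overflow"))?;
    Ok(JsValue::from(result))
}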

AI Spotlight: Lilypad

by Marla Natoli

Lilypad is creating a distributed compute network focused on enabling AI and ML use cases. It lets you contribute idle compute power from your own hardware via established, verifiable processes powered by smart contracts. Their focus is high-throughput data processing to power models that require high-performance computing, and Lilypad's decentralized compute network can significantly reduce the cost and time associated with generating large datasets.

With the proliferation of AI services and infrastructure, there are a few key recurring themes we're hearing about from teams working in AI: the need for compute power, the importance of internet-scale processing, and the value of verifiable data pipelines. At Basin, we're looking to support decentralized, high-throughput compute workflows with our decentralized object storage solution running on horizontally scalable subnets (powered by IPC). This makes key data available quickly, enabling compute over that data from various stakeholders and facilitating collaboration over datasets, which is becoming more and more important for AI training.

If you’re working in AI and interested in decentralizing database infrastructure to bring collaboration and verifiability to your data, we’d love to hear from you. Read more about Basin here and apply for our private beta here.

What we still have up on the machines

by Jim Kosem

In "What can LLMs never do?", Rohit Krishnan exposes some fascinating gaps in the AI takeover of everything.

There is no shortage of polar-swinging discussions around LLMs and "AI" in general. It is either going to save humanity and single-handedly make a better cocktail than any human ever could, say the cheerleaders, who often have a monetary stake in this "force" (because what else is it, really, at this point but something nebulous...). On the other side of the debate globe are the fearful and huddled masses seeing nothing but doom and gloom.

This is a large camp that is quite easy to join, because humans are easy to frighten, and when there is lots of money to be made, it's easy to get screwed. There is hope for the masses, though: there are things the LLMs just can't figure out that most three- to four-year-old humans can, such as statements of equality (this is that and that is this) and, interestingly, things involving words and grids. Who would have known. The LLMs sure didn't.


Other updates this week

  • ETHcc is about a month away! We’ll have a couple of folks in attendance, so keep an eye out for further announcements and when/where we’ll be. And if there are any events you’d like to collaborate on, let us know on Discord!

End transmission…

Want to dive deeper, ask questions, or just nerd out with us? Jump into our Telegram or Discord, where we host weekly research and developer office hours. And if you'd like to discuss any of these topics in more detail, comment on the issue over in GitHub!

Are you enjoying Weeknotes? We'd love your feedback—if you fill out a quick survey, we'll be sure to reach out directly with community initiatives in the future! Fill out the form here.
