
Weeknotes: Decentralized data landscape, Basin intro, Rust server 101 with warp, & GenAI design patterns

Read up on our latest blog posts about the decentralized data landscape and Basin's initial architecture, learn how to build a simple Rust server with warp + tokio, and explore design patterns for GenAI.

Decentralized Data Solutions: Insights and Opportunities

by Andrew Hill

Our recent blog post is essential reading for web3 builders, offering a thorough overview of the decentralized data landscape, complete with insights from a comprehensive survey of 32 developers and leaders. This piece highlights the opportunities and challenges in adopting decentralized infrastructure. It also calls out some key opportunities for all of us to improve data protocols in web3.

Basin architecture introduction

by Dan Buchholz

We published a blog post about how the new and improved Basin works and what makes its design unique. There's still a lot we haven't fully dug into, implemented, or finished researching, but this post should provide additional insight into the network.

The gist of it is that it's built on Filecoin's IPC subnet architecture for fast and verifiable data—specifically, object storage. If you want to learn more, the post talks about the general consensus mechanism, parent-child subnets/state, ABCI & FVM, and more!

Simple Rust API server with warp & tokio

by Dan Buchholz

We're adding a feature to the Basin CLI that will let nodes run a daemon / API server to auto-fund new accounts. This addresses a developer experience (DX) gap: if you're a new user coming from the EVM world to the FVM and the Basin (IPC) subnet, your EVM account is compatible with the FVM, but a bit of onchain "registration" is required.

Filecoin has an Ethereum Address Manager (EAM) that maps an EVM 0x address to a delegated f410 address, which lets the hex-prefixed address work natively on Filecoin. But for the 0x address to work, it needs to be registered onchain with the EAM, and that registration requires a transaction.

In the coming weeks, we'll be adding a feature to the CLI that abstracts this process away. The core logic is a hosted, funded backend wallet that sends a small amount of FIL to each new account. Since Basin is only live on the Filecoin Calibration testnet, it doesn't actually "cost" anything for the backend wallet to fund new accounts. If you haven't had a chance to use the Basin CLI yet, get started while we implement it! For now, you'll need to get your own tFIL testnet currency and deposit it into the Basin FVM subnet, and then you're good to go.

Setup

Below is a skeleton of the API server. It uses warp as the server framework and tokio for async operations. There's a single /fund endpoint that takes an address as a parameter, so a request like /fund/0x1234… will trigger an FVM and Basin subnet transaction that sends some tFIL to the specified address.

use std::{convert::Infallible, sync::Arc};

use dotenv::dotenv;
use serde::Serialize;
use serde_json::json;
use tokio::sync::Mutex;
use warp::{http::StatusCode, Filter, Rejection, Reply};

#[tokio::main]
async fn main() {
    dotenv().ok();

    let state = Arc::new(Mutex::new(State::new()));

    let fund = warp::post()
        .and(warp::path!("fund" / String))
        .and(with_state(state.clone()))
        .and_then(handle_fund);

    let router = fund
        .with(
            warp::cors()
                .allow_any_origin()
                .allow_headers(vec!["Content-Type"])
                .allow_methods(vec!["POST"]),
        )
        .recover(handle_rejection);

    warp::serve(router).run(([127, 0, 0, 1], 8081)).await;
}

The handle_fund and handle_rejection functions are the handlers that execute logic once the endpoint is hit. Aside from those, you can also pass local state to the API—for example, perhaps you want to implement a simple rate-limiting feature. The with_state filter clones a shared handle into each request, letting you read or update that state every time the API is hit.

/// State for the API
struct State {
    // Rate limit data?
}

impl State {
    /// Create new state.
    pub fn new() -> Self {
        State {}
    }
}

/// Filter to pass the state to the request handlers.
fn with_state(
    state: Arc<Mutex<State>>,
) -> impl Filter<Extract = (Arc<Mutex<State>>,), Error = Infallible> + Clone {
    warp::any().map(move || state.clone())
}
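As a sketch of what that shared state might hold, here's a hypothetical rate limiter (not part of the Basin CLI) that tracks the last request time per address and rejects requests that arrive too quickly:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical rate-limiting state: remembers when each address was
/// last funded and enforces a minimum interval between requests.
struct RateLimitState {
    last_seen: HashMap<String, Instant>,
    min_interval: Duration,
}

impl RateLimitState {
    fn new(min_interval: Duration) -> Self {
        RateLimitState {
            last_seen: HashMap::new(),
            min_interval,
        }
    }

    /// Returns true if the address may be funded right now, recording
    /// the request time if so.
    fn check_and_record(&mut self, address: &str) -> bool {
        let now = Instant::now();
        if let Some(&prev) = self.last_seen.get(address) {
            if now.duration_since(prev) < self.min_interval {
                return false;
            }
        }
        self.last_seen.insert(address.to_string(), now);
        true
    }
}
```

A handler holding the `Arc<Mutex<…>>` would lock it and call check_and_record before doing any onchain work.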

The rejection handler is nice because it lets you control responses upon 404s, internal server errors, etc.:

/// Generic request error.
#[derive(Clone, Debug)]
struct BadRequest {
    message: String,
}

impl warp::reject::Reject for BadRequest {}

/// Custom error message with status code.
#[derive(Clone, Debug, Serialize)]
struct ErrorMessage {
    code: u16,
    message: String,
}

/// Rejection handler for the API.
async fn handle_rejection(err: Rejection) -> Result<impl Reply, Infallible> {
    let (code, message) = if err.is_not_found() {
        (StatusCode::NOT_FOUND, "Not Found".to_string())
    } else if let Some(e) = err.find::<BadRequest>() {
        let err = e.to_owned();
        (StatusCode::BAD_REQUEST, err.message)
    } else {
        (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
    };

    let reply = warp::reply::json(&ErrorMessage {
        code: code.as_u16(),
        message,
    });

    Ok(warp::reply::with_status(reply, code))
}

Lastly, this is a stubbed-out example of the actual handler for the endpoint. It's where you'd do some processing or make other function calls that handle the bulk of the logic, such as the onchain calls to the FVM for the "registration" process:

/// Handles the `/fund/<address>` request.
async fn handle_fund(
    address: String,
    state: Arc<Mutex<State>>,
) -> Result<impl warp::Reply, warp::Rejection> {
    // Do stuff: validate `address`, consult `state`, and execute the
    // onchain funding transaction here.
    let _ = (address, state); // Silence unused-variable warnings in this stub.

    let json = json!({"tx": "stuff"});
    Ok(warp::reply::json(&json))
}
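The stub above is also where input validation would live. As a sketch (not the Basin CLI's actual logic), a hypothetical validate_address helper could check the shape of the 0x address, and the handler could turn an Err into the earlier BadRequest rejection via warp::reject::custom:

```rust
/// Hypothetical address check for the `/fund/<address>` endpoint:
/// accept only 0x-prefixed, 40-hex-character (20-byte) strings.
fn validate_address(address: &str) -> Result<(), String> {
    match address.strip_prefix("0x") {
        Some(hex) if hex.len() == 40 && hex.chars().all(|c| c.is_ascii_hexdigit()) => Ok(()),
        _ => Err(format!("invalid address: {}", address)),
    }
}
```

In handle_fund, `validate_address(&address).map_err(|message| warp::reject::custom(BadRequest { message }))?` would then route malformed input through handle_rejection as a 400.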

GenAI design patterns

by Jim Kosem

There are lots of questions about how AI will impact “knowledge” work as we all huddle in front of our machines, waiting for things to hit terminal velocity and, hopefully, calm down. One thing is for sure in the meanwhile: we should think about how to use this stuff. There is no shortage of enthusiastic screeds and self-flagellating doomsaying, but one thing you don’t see often is patterns.

Design patterns are, quite simply, codified recommendations of use: you should design or build software this way or that. The interesting thing here is establishing how AI fits in terms of scope. The prevailing approach so far is that you type something into a box and AI does everything. This article examines it from a different, more nuanced standpoint—one that staid, solid enterprise types like Microsoft have long understood: you need to be clever about where you slot AI in, and how much. You need to think about which nail the hammer is going to pound down, rather than demolishing a bed of nails with a ton of concrete. The author takes this slightly further in exploring whether that fit is alongside or separate, layered or integrated. Perhaps most interesting, as paragraph upon paragraph is written (or perhaps even generated) about the provenance of produced words, images, and sounds, is the notion of awareness and action.

All in all, it’s a good, middle-of-the-road piece of thinking about AI in the tools that people like us at Textile are building.


Read & watch

End transmission…

Want to dive deeper, ask questions, or just nerd out with us? Jump into our Telegram or Discord—including weekly research office hours or developer office hours. And if you’d like to discuss any of these topics in more detail, comment on the issue over in GitHub!

Are you enjoying Weeknotes? We’d love your feedback—if you fill out a quick survey, we’ll be sure to reach out directly with community initiatives in the future: Fill out the form here
