
Beanstalk Dev Call #4

Date
February 9, 2023
Timestamps
0:00 Intro • 0:43 Purpose of Indexing Software • 2:59 The Graph • 5:01 Centralized Alternative • 8:59 Considering Future Developments • 17:21 Decentralization Profile of The Graph • 23:39 What Is Sufficiently Decentralized? • 26:23 Current UI Implementation • 32:29 Development With Subgraphs • 42:07 Custom Solution • 44:30 Review of Indexer Options • 1:05:33 Suitability of Subgraphs • 1:26:42 Requirements of Custom Indexer • 1:35:08 Next Steps • 1:42:02 Censorship Resistance
Type
Dev Call

Meeting Notes

Conclusions

  • The community seems to be leaning towards a custom solution. More exploration and research will need to be done, but that does seem to be the way forward.
  • In the meantime, a more limited subgraph implementation could be helpful to buy enough time to get the indexer solution right.
  • A more short-term product could be created as a proof of concept for the custom indexer.
  • Team code review of Tractor will be necessary to have some sort of interface lock in order to start making indexing decisions. Some of this can be done in parallel, so that the contract functionality can be adapted to accommodate the needs of the indexer if necessary.

Need For Indexing/Overview

  • One of the fundamental principles of building on a censorship resistant, immutable chain is that you want the on-chain data to remain as light as possible. Achieving that requires a lot of off-chain indexing of the on-chain data. In order to understand or interpret the potentially very limited data encoded on-chain, each system of smart contracts typically requires its own indexing software (see the sketch at the end of this list).
  • It is important to consider how the indexing layer factors into the censorship resistance of the systems being created.
  • The Graph is a network that hosts indexing software in a decentralized fashion. There are multiple subgraphs, which are indexers for Beanstalk and the Bean token, that run on The Graph network and are hosted in a decentralized fashion now. The bean.money website uses both of those subgraphs, and the website is currently hosted on a centralized server.
  • The centralization vector associated with using Beanstalk has been minimized as much as possible through the open sourcing of the website code and through the deployment of the subgraphs on The Graph network.
  • The hosting of the website means there is still some need to interact through a centralized intermediary. In theory, since the website is open source, you can run it all locally, and having the data indexed on the subgraph makes doing so much easier. But from a centralization perspective, the vast majority of users are interacting with Beanstalk through a centralized host.
  • The question that must be asked is: until there is some sort of decentralized hosting network, or way to host websites in a decentralized fashion with a competitive user experience to centralized hosting options, what is the value of the labor intensive work of developing subgraphs? Is it a better idea to instead develop centralized indexing software that runs on a centralized server, and open source that code such that anyone can run their own copy?
  • Subgraphs have the limitation that they can't index multiple chains, while Tractor orders can be placed on any EVM-compatible chain.
  • If we instead develop open source indexing software that runs on a centralized server, it would have similar centralization vectors as are currently accepted with the website.
  • There is a separate question of what is the actual indexing work that needs to happen.
  • Eventually there may be off-chain zero knowledge solutions capable of indexing Tractor's entire order book.
  • Whatever tech is chosen will entail a commitment to using that tech for a substantial amount of time. It's important to choose the best solution that will minimize the amount of code that needs to be replicated.
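
To make the overview concrete, here is a minimal sketch, in TypeScript with ethers.js, of what "indexing on-chain data off-chain" means in practice: replaying a contract's events into derived state that the chain never stores in queryable form. The Sow event signature is paraphrased from Beanstalk's Field events and the RPC URL is a placeholder; treat the whole thing as illustrative rather than as Beanstalk's actual indexer.

  import { ethers } from "ethers";

  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545"); // placeholder RPC
  const beanstalk = new ethers.Contract(
    "0xC1E088fC1323b20BCBee9bd1B9fC9546db5624C5", // the Beanstalk diamond on mainnet
    ["event Sow(address indexed account, uint256 index, uint256 beans, uint256 pods)"],
    provider
  );

  // Derived state: total Pods sown per account. This mapping exists nowhere on
  // chain as a whole; it is only recoverable by replaying the event history.
  async function indexPods(fromBlock: number, toBlock: number): Promise<Map<string, bigint>> {
    const pods = new Map<string, bigint>();
    const logs = await beanstalk.queryFilter(beanstalk.filters.Sow(), fromBlock, toBlock);
    for (const log of logs) {
      const account: string = log.args!.account;
      const amount: bigint = log.args!.pods.toBigInt();
      pods.set(account, (pods.get(account) ?? 0n) + amount);
    }
    return pods;
  }

Every indexer discussed on this call, whether a subgraph, Substreams, or a custom solution, is some scaled-up version of this loop.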

The Graph

  • The Graph has its own network with independently operating nodes, similar to the architecture of Ethereum.
  • Indexers for The Graph stake GRT tokens and execute the instructions to do the indexing. They submit proofs of indexing and are subject to slashing if their work is incorrect.
  • The Graph implements a very specific type of indexer which imposes engineering constraints that can affect the products that can be built. It may or may not be the case that The Graph supports everything that we need long term.
  • If The Graph doesn't support something we need to do, at that point we could consider forking it to provide some upgrades or starting from scratch to build something optimized for Tractor and Wells.
  • The Graph offers a centralized host (the hosted service), which is an instance of their software running on a cloud provider. The software is open source, but too demanding for most people to run themselves. Other pieces of the infrastructure, such as the UI, can be run on most modern computers. (A sketch of how a client queries a deployed subgraph follows this list.)
  • Ultimately if we can make it easy to run the different components, that is a de facto form of decentralization. Even if it is not on a decentralized host, if it can be run by users themselves that produces a similar result.
  • It is not currently possible to have a subgraph run on a locally forked version of Ethereum for development and testing purposes due to the volume of queries required to index all of Beanstalk over its history.
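
For reference, consuming a deployed subgraph from a client is just a GraphQL POST; a minimal sketch in TypeScript follows. The endpoint URL and the entity/field names are hypothetical stand-ins, not the real Beanstalk subgraph schema.

  // Query a subgraph over HTTP. The same call works against the hosted service
  // or a decentralized-network gateway; only the URL changes.
  const SUBGRAPH_URL = "https://example.com/subgraphs/name/beanstalk"; // hypothetical endpoint

  async function querySubgraph<T>(query: string): Promise<T> {
    const res = await fetch(SUBGRAPH_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    });
    const { data, errors } = await res.json();
    if (errors) throw new Error(JSON.stringify(errors));
    return data as T;
  }

  // e.g. the first ten Pod Marketplace listings (hypothetical entity and fields)
  querySubgraph(`{ podListings(first: 10) { id index amount pricePerPod } }`).then(console.log);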

UI Decentralization

  • Decentralization is a matter of the least common denominator, such that until there is a way to sufficiently decentralize the UI, the value of decentralizing the indexer is marginal. Open sourcing the indexer is incredibly important, but as long as Beanstalk Farms is hosting the UI, there might not be much benefit to having the indexer deployed on a decentralized system as opposed to running it locally.
  • The UI has been designed to be runnable as a static website so it can be hosted somewhere like IPFS.
  • There needs to be some discussion about how users can be sure they're using safe deployments of the Beanstalk UI and not a malicious forked version.
  • The UI accesses most of the core data directly from the blockchain. It uses the subgraph in some cases to make it cheaper to access certain sets of data, or where the data is too complex to parse and process in the UI. But for things like Silo Deposits, it pulls straight from the chain (see the sketch after this list).
  • Designing the UI to access the data directly wherever possible means that all you need is an Ethereum node and a way to run the UI. It is much slower to load data this way, and it can't be used for something like the Pod Marketplace, even for a single user, much less all orders and listings that have ever existed. But it is possible to use many of the core functions.
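
As a sketch of the direct-from-chain pattern described in the last two bullets, all a UI needs is an RPC provider and an ABI fragment. The view function below is offered as an example of a Beanstalk Silo read, but treat the exact signature as illustrative; the point is the pattern, one trustless RPC round-trip per value.

  import { ethers } from "ethers";

  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545"); // any Ethereum node
  const beanstalk = new ethers.Contract(
    "0xC1E088fC1323b20BCBee9bd1B9fC9546db5624C5", // Beanstalk diamond
    ["function balanceOfStalk(address account) view returns (uint256)"],
    provider
  );

  // Simple and trustless, but one round-trip per value: fine for a user's own
  // balances, far too slow for something like the whole Pod Marketplace, which
  // is exactly why those views lean on an indexer instead.
  async function loadStalk(account: string): Promise<ethers.BigNumber> {
    return beanstalk.balanceOfStalk(account);
  }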

Decentralized Compute

  • There are a couple of other options that are in early development and aren't yet complete and tested, such as Substreams or Axiom (a zero-knowledge protocol).
  • Using Substreams is unattractive: it would require a developer who can work with Rust, and the team also seems to have little interest in supporting additional chains. It is going to be a longer and more involved process to use, likely requiring consultation with their team. This is unlikely to be the solution we are looking for.
  • It is very unclear what the capabilities of Axiom will be, although the zero-knowledge proof technology itself is promising. It will take major investment just to understand the technology and even start building on it.

Custom Indexing Solution

  • A custom solution would be able to parse data from different blockchains and index it sequentially, storing the data in Postgres or another database. It would implement custom logic to handle the various mappings needed to track orders and the like (see the skeleton after this list).
  • Using a custom solution would be the most flexible, allowing us to start in whatever capacity is desired with a goal of making it more lightweight over time. This would require some research from somebody familiar with this technology doing a deep dive to determine how to build it efficiently. However, it would be easy to spin up a very basic version and development could prioritize what is needed by the products being built.
  • Ultimately, whatever is built should be in the context of decentralized verifiable compute. It might be possible to start with a pared down version of a local custom solution to see if it could be built with Axiom.
  • It is probably worth doing a deeper dive on the Axiom code base and see how close they are to being ready.
  • If Axiom isn't viable, it probably makes sense to go with some other generalized zero-knowledge proof system and a fully custom JavaScript solution.
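
As referenced above, here is a skeleton, in TypeScript with ethers.js, of what such a custom indexer could look like: walk each chain's blocks forward in order, decode logs, and dispatch them to registered handlers. All names and shapes are a hypothetical sketch under the assumptions in the bullets above; a real implementation would persist state to Postgres, filter logs by address, and handle reorgs.

  import { ethers } from "ethers";

  type Handler = (log: ethers.providers.Log, chainId: number) => Promise<void>;

  class Indexer {
    private handlers = new Map<string, Handler>(); // topic0 -> handler

    constructor(
      private chains: { chainId: number; provider: ethers.providers.JsonRpcProvider }[]
    ) {}

    // Register a handler for an event signature, e.g. "Transfer(address,address,uint256)".
    on(eventSig: string, handler: Handler): void {
      this.handlers.set(ethers.utils.id(eventSig), handler);
    }

    // Scan each chain sequentially in fixed block batches so handlers always
    // see events in chain order, the same guarantee a subgraph gives.
    async run(fromBlock: number, batch = 1000): Promise<void> {
      for (const { chainId, provider } of this.chains) {
        const head = await provider.getBlockNumber();
        for (let cursor = fromBlock; cursor <= head; cursor += batch) {
          const toBlock = Math.min(cursor + batch - 1, head);
          const logs = await provider.getLogs({ fromBlock: cursor, toBlock });
          for (const log of logs) {
            const handler = this.handlers.get(log.topics[0]);
            if (handler) await handler(log, chainId);
          }
        }
      }
    }
  }

Because the ingestion loop owns the provider list, adding a chain is one more entry in the constructor, which is the multi-chain property subgraphs currently lack.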

Transcript

On it. Good. Cool. So as I understand it, as previously described in the meeting a few minutes ago, I think it would be helpful to start with a high level discussion around all the off-chain work to be done surrounding Wells and Tractor, with regards to middleware work like SDKs, indexing software, user interfaces, etc. But Publius, if you want to maybe set the stage a little bit better than that. So at the end of the day, one of the fundamental pieces of building on a censorship resistant, immutable chain is that you want to have the data that is kept on the chain be as light as possible. And with that in mind, there's a lot of indexing of the on-chain data that is ultimately required in order to understand or interpret or know what to do with the potentially very limited data that is encoded on chain. From that perspective, each smart contract or series of smart contracts typically requires its own indexing software. And there's something to be said for talking a little bit about what some of the indexing software requirements will be for some of the tech that has now been discussed as being built, including Wells and some of the betting markets or some of the lending pools, etc. But from a philosophical perspective, it's also important to consider what are some of the requirements of indexing software to facilitate censorship resistance. In particular, given that censorship resistance is certainly one of the primary reasons to deploy these smart contracts, it's important to consider how the indexing layer factors into the censorship resistance of the things that are being created, and at the moment maybe just to talk a little bit about that and where things are at currently. So The Graph is a network to host indexing software in a decentralized fashion. There are multiple subgraphs, which are indexers for Beanstalk and the Bean token, that run on The Graph network and are hosted in a decentralized fashion now. The bean.money website uses both of those subgraphs, and the website is currently hosted on a centralized server somewhere. I think it's Netlify; I don't know exactly at the moment. But the point is that the centralization vector associated with using Beanstalk has been minimized as much as possible through the open sourcing of the front end code, the website code, and through the deployment of the subgraphs on the decentralized Graph network. Now, if you really think about this from a least common denominator perspective, the hosting of the website makes it such that there's still some need to interact through some centralized intermediary. Now, the website is open source, so in theory you can run it all locally, and having the data indexed on the subgraph makes doing so much easier. It also makes running the centrally hosted website easier on clients. But the point is that from a centralization perspective, the vast majority of users that want to just go to a website are interacting with Beanstalk through a centralized host.
And so there's a question that must be asked, which is: until there is some decentralized host or hosting network or way to host websites in a decentralized fashion with a user experience that is competitive, or even semi-competitive, with centralized hosting options, what is the value of building decentralized indexing software? In particular, developing subgraphs for The Graph is pretty tedious work. Is there value in putting in a lot of work going forward in developing subgraphs, or instead developing centralized indexing software that runs on a centralized server and open sourcing that code as well? Similar to how currently some entity, in this case Beanstalk Farms, hosts the front end on a centralized server, if the indexing software is just open source, there's no reason why you can't reach a similar level of decentralization without going through all of the extra work associated with subgraph development. So that's maybe one of the points of this comment. With all of this stuff being built in the ecosystem, there's an open question as to how all of it will be indexed, and whether work should commence on building subgraphs for all of it. Given the nature of where we expect exchanges to take place, meaning Tractor and Wells, and the fact that Tractor orders can be placed on any EVM-compatible chain, and that subgraphs can't index multiple chains and we don't expect them to be able to for the foreseeable future based on our conversations with The Graph team, what is the plan going forward? Does the community need to develop our own decentralized indexing software? Or is it instead acceptable to have similar centralization vectors as is currently the case with the website, and commit to having some sort of open source indexing software for everything that doesn't need to run in a decentralized setting? Those are probably the two options going forward, and it's a discussion worth having as a community, because ultimately we're going to have to move forward collectively one way or another. But that's probably the starting point for the conversation. And then there's a separate question as to what the actual indexing work is that needs to happen, which may also be an interesting conversation to be had. So there's the macro question as to the right way to build the indexing software, given the goal of censorship resistance and the current state of the space, and then the more micro question of which pieces of software currently need to be indexed and how we are going to get them indexed. A little bit of a long winded way to kick us off, but that's where things are at. So maybe just to jump in here and add a little more to that: it's important to also take in the scope of where things are versus where things could be in their idealized state when discussing how these projects will weave together and evolve over years, as decentralized storage and decentralized compute become cheaper and cheaper. Ultimately, the difference between Tractor and a Well is that with a Well, the proof of the liquidity is measurable inside of the EVM, whereas with Tractor it's not. And when you think about what really is at the core, what are they?
They are both orders to exchange some asset or set of assets for another asset or set of assets, and what they need to be used for is completely different. In its most minimal state, it's just metadata containing the order details, like something like Tractor. In its most expansive state, it's a fully queryable database system that allows people to search for specific orders and performs all of the indexing and all of the computation under the hood, such that the UI interacts directly with the actual blockchain. That's something similar to what something like Uniswap V2 is able to do by making the nature of the order be one single order; because everyone is using the same order, it doesn't have to query a very complex indexing system, as there's only one order to query between, say, Beans and USDC. However, as technology scales on chain and nears a more fully idealized state, think of an app chain that contains everything wrapped as one, with a flawless security model that's properly decentralized, where all validators are performing some sort of decentralized storage and decentralized compute on top of that storage, allowing essentially everything to be carried within that environment. It's still unclear what the technological limitations actually are as this technology continues to scale. One thing that has recently come to our attention is this new protocol called Axiom. Axiom provides access to two sorts of data points through zero knowledge proofs. It's able to provide trustable outputs by performing some base level of computation on historical Ethereum data. So, for example, instead of having to use an on-chain Pump, Beanstalk could use some off-chain statistical formula that is a function of past Ethereum blocks, and even specific storage slots in those Ethereum blocks, and applies some verifiable compute that can be verified with a proof, where the proof is generated through zero knowledge, so SNARK or STARK technology. From that context, suddenly a sufficiently complex off-chain zero knowledge solution is capable of indexing Tractor, and it's not just capable of indexing Tractor for solutions such as Oracles; it's capable of indexing an entire order book. And now you can have this provable order book, and perhaps zero knowledge technology continues to grow and soon the Ethereum Virtual Machine is executing inside of a zero knowledge execution layer, and one can start to see how it could potentially converge into a single thing very soon. But it's completely unclear how the technology will continue to evolve over the next couple of years.
And really the point being that the Beanstalk community, by committing to building certain pieces of tech, is making a commitment to using that solution for a substantial amount of time, as any sort of product would take a significant amount of effort to build. We really need to make sure that the solution that is ultimately chosen is the best, in the context of wanting to minimize the amount of code that needs to be replicated. So I personally don't have too much to add on to that specific point, other than to highlight that I think we may want to bifurcate a couple of things here. The first is: to what extent, in the short term, should Beanstalk infrastructure be decentralized? That includes the deployment and hosting of the UI, as well as the components that are necessary to make the UI run, which are an RPC node and a subgraph. The second is: what is the scope of indexing software in the short to medium term in order to get initial versions of something like Tractor off the ground? That might tend to look a little bit more like something that fits with existing technology, for example using a subgraph to index Tractor orders, or building our own lightweight indexer to do this, that sort of stuff. I largely think about this in terms of how we can build a prototype that helps inform other engineering decisions. Which leads me to the third thing, which is thinking very long term, many years down the road, perhaps not in terms of development but just in terms of longevity for the product: what does the final form look like? I think what Publius has just described is really important, but it also highlights the fact that the process of figuring this problem out exhaustively is probably a pretty significant research project, which we may want to allocate some resources to. At least to my knowledge, there isn't an easy answer off the top of the head, and it's likely going to involve investigating a number of the different products you mentioned, Publius, as well as, as we've talked about in other calls, the merits of having some sort of app chain in which you own the execution layer, and that sort of exploration. So all of those are on the table, and the effort will likely involve first deciding short term versus longer term goals, and then, with respect to the longer term goals, spending some time mapping out areas of exploration like the ones you mentioned. This is maybe a more naive question, but could one of you describe the ways in which something like The Graph network is less censorship resistant than Ethereum, but more censorship resistant than someone hosting a centralized database? And maybe what some of the limitations are of their implementation? Yeah, I'll take that, and correct me if the assumption at the beginning of the question wasn't right. So there's a couple of different ways to slice this. As it stands, in the same sense that Ethereum had miners previously and now stakers, or nodes, so does The Graph network for their decentralized network implementation.
So I'd have to think more about where on the margin those two are different. But to me, The Graph network makes sense from a decentralization perspective, in that it's sufficiently decentralized compared to Ethereum. However, the one thing that's worth noting is that The Graph implements a very specific type of indexer, which has its own engineering constraints, which can affect the products that can be built with it. And it may not be the case that The Graph supports everything we need, for example, in order to be able to scale for the long term. At that point it might be worth looking into forking it to provide some upgrades, or potentially starting from scratch to build something that's optimized for what Tractor and Wells are trying to do, and from there to consider to what degree decentralization is baked in. Maybe to speak a little bit to what would be more on the centralized side of how these things get used: The Graph network historically has offered a centralized host, what they call the hosted service, which is basically just an instance of their software running on some cloud provider. The software under the hood is open source, and anybody can spin it up and run an indexer. However, it is pretty expensive and time consuming to do; the Beanstalk subgraph, for example, is a little bit too hefty for most people to run on their own computer. So that is a limitation that's worth taking into account. But other pieces of the infrastructure, like running the UI, for example, are doable from most modern computers. So you can run it yourself or you can host it on your own box; a lot of options there. So it's interesting to think about: when we talk about decentralization, particularly with respect to The Graph, the decentralized host tries to solve that problem. But ultimately, if we can also make it easy to run the different components yourself, that in itself is a form of decentralization, in the sense that even if there isn't some decentralized network deployment for a particular subgraph, if users know how to spin it up themselves, and maybe lots of people spin it up on their own machines, scattered across the cloud or locally, that in effect is producing a similar result. So yeah, hopefully that answers your question. So another naive follow up question: on The Graph's decentralized network, where is the code that's actually executing this indexing being executed? Is that what you're referring to, the stakers in their network? So, definitely not an expert on the fine details, but to my best understanding, what is a subgraph?
It is a piece of work deployed on chain, and indexers index a subgraph, which is a set of instructions, over previous historical Ethereum data. The Graph has some way to generate a proof of indexing, which is a proof that one generates by executing the subgraph over that state, and it's optimistically accepted through some sort of optimistic system where they report it. Then if it's wrong, someone submits some sort of fraud proof, and in that case they'll be deemed malicious. Indexers stake some amount of GRT tokens, and presumably those tokens get slashed some amount if they provide an incorrect proof of indexing. Economically, anyone can take GRT tokens and stake them against a subgraph, which allocates some proportion of GRT to each subgraph proportionately, as some sort of gauge system. And then the indexers can charge a query fee, which also gets distributed to indexers for submitting the proof of indexing. That's helpful, thanks for walking through that; didn't mean to derail in any way. So I guess, when you think about what it means for any of this indexing software, whether it's something we build or whatever software ends up supporting something like Tractor and Wells, what is sufficiently decentralized? I mean, to Publius's point earlier, it's a least common denominator thing: until there's a way to sufficiently decentralize the UI, the marginal increase in value from decentralizing the indexer is incredibly small. Open sourcing the indexer is obviously incredibly important. But while Beanstalk Farms is hosting some UI as it is now, is there any benefit to also having them host the indexer, or having the indexer deployed on some sort of decentralized system, as opposed to running it locally? It's hard to know what will eventually decentralize the UI and at what point that makes sense. If there were some kind of application library where you could add a UI, perhaps some sort of app store where the apps live locally and users are essentially running their own front end, if that became commonplace, then having the indexer run on a decentralized network is definitely a step up, as now there's a fully decentralized end-to-end flow, with a decentralized blockchain, a decentralized indexer, and users running their own UI locally. But until that point, it's unclear what the marginal benefit is of using the decentralized deployment, how much it helps. So in the short term, it really boils down to something Chad brought up, which is: is development done on a subgraph, where it could be deployed to the decentralized network at any time, and it's being built in provable computation
You know, kind of units and or, you know, does it make sense to start creating some, you know, open source, simple but custom lightweight indexing solution that anyone else can run locally, but, you know, that could eventually be deployed to some kind of decentralized web hosting, but not necessarily. But but as of now, it probably couldn't be deployed to some sort of decentralized hosting. I would I would briefly highlight that in terms of the on the UI front itself, referring to the actual like, you know, React code that is deployed that renders all the buttons and that kind of stuff. There's two components to decentralization to highlight. So the first is that we do deploy our main UI to now the fly, which is the one that you see when you go to APT up and get money. But there are ways to deploy, for example, to Ipfs or other file store sorts of hosts. So we've designed the UI intentionally to be run both as a static website and so anywhere that can host static files, which includes Ipfs and or even that sort of thing, can host the Beanstalk UI. And in fact there actually is already a mirror that's deployed to using a tool called fleek. I think it's fully got SEO which deploys to Ipfs. And so that's the first thing. There's, there's some questions there around. Like once you do that process, how do you inform users of like where safe deployments of the beanstalk UI are? Obviously since it's open source, anybody could fork it, deploy it, tell you that it's a real one and actually it's stealing all of your beans. And so that's something that we should probably discuss more. But I think one possible solution might be, for example, you know, some sort of NSA based routing in which, you know, there's a provable on chain, you know, I guess DNS route, the equivalent of DNS on NSA, which routes from units to IPFS to render that. So that's maybe a separate rabbit hole to go down. But just to highlight that, I think the second thing is that so there's the indexing software that Beanstalk currently runs that which is predominantly the sub graph is used to in some cases make it cheaper to access certain sets of data. And in some cases, you know, there is it's too complex to parse and process the data in the UI itself and it needs to happen at some other layer. However, most of the core data that beanstalk, you know, things like silo deposits and, and pods in the field and that sort of thing, those sorts of core components are without too much trouble readable. Typically in the UI. So for example, as it stands, the when you go to the, the you and you load your silo deposits, it's actually not using the sub graph. It's calling the chain directly. And we've left this as a this is a deliberate decision to build it this way. And it's something that we'll continue to to try to enable where we can because being able to derive the full UI just from, or at least most of the UI from the only information that's stored on chain means that the only the only thing you need is an east node and a way to run the UI and your golden. There's a lot of challenges that arise from this. For example, it's, you know, for a lot of reasons, much slower to load data and that in that manner with respect to the pod market, it's, you know, prohibitively prohibitively complex to index the entire pod market in this way, even for a given individual user, much less for all orders and listings that have existed in the pod market. So for that reason, those components tend to lean on the sub graph for their data. 
But I bring this all up to say: when it comes to some of the core pieces of Beanstalk, like seeing your Silo Deposits and Pods, that can be done with only an Ethereum node and some well-architected front end code. To that end, it sort of feels to me like the UI and the indexing layer have different weightings in this least common denominator problem that Publius describes, and personally it feels like the activation energy required to run your own subgraph would be much higher than running your own UI, in which case it's somewhat less true that further investments in censorship resistance at the indexing layer are wasted; the value in doing that is not lessened by the fact that the UI is centralized. But what do you guys think? Or anyone, for that matter. I would ask people who have tried to run a graph node locally: how difficult is that with the Beanstalk monorepo? Does it feel like it's as easy as spinning up a node and deploying the subgraph? Maybe it's a couple of commands, maybe that somehow gets wrapped into one single script. But I would argue that the graph node itself isn't necessarily too difficult to spin up. It's doable if you're indexing mainnet. It's much more complicated if you're trying to use it for engineering, where you want to index something locally, like a fork of mainnet where you do your own actions and have a subgraph indexing that. We've had a lot of problems with that, and that tends to do with the fact that Beanstalk indexes a lot of events, and it's just a little bit trickier of a problem. But if you're just trying to see what's going on on mainnet, it is doable. So I guess, getting back to the discussion of subgraph versus custom lightweight indexing solution: from your perspective, do you feel like the graph node is maybe not very optimized, and that it would not be too difficult to create an alternative that is sufficiently faster in terms of indexing time and usability? I'd have to study The Graph's architecture internally much more in depth to be able to say whether we could build something that's faster. For what it's worth, the problems that arise from running it locally against a fork, which is something we want to do in development in particular, but is a little bit less important for just being able to display current mainnet data, are less about the subgraph itself and more about the fact that it requires so many queries to index all of Beanstalk over Beanstalk's history that it becomes impossible to simultaneously run your own local RPC that is capable of having both the subgraph and the UI talk to it, more or less. So right now this is not a limitation that it seems we'll be able to get around anytime in the near term, even using the best, Foundry-level tools that currently exist. But that's a little bit separate from the question of whether it can be optimized further. I'd say that we'd have to spend some time really digging in to understand whether it could be optimized, and frankly, whether The Graph is even something that's worth forking.
Just a quick caveat there, because Marshall and I did some research on this topic, and we did figure out a way to let it index without crashing and without blocking the UI. The problem is that it just takes so long; after 8 hours we basically just gave up, because it's infeasible for us to use that as a tool if it takes 8 hours to spin up. But during that indexing time it was functional, and you could still use the UI. So I think the problem now is really just the eight-plus hour wait. With Replant, we had scripts to generate a lot of the data which fetched pretty much every historical on-chain Beanstalk event, and it didn't take hours to do so; it would probably take somewhere like 15 to 30 seconds. I understand that The Graph probably individually downloads every single block and iterates through all the transactions to see if it triggers any handlers, but just from a developer perspective, I've personally found similar troubles, where The Graph is incredibly slow at indexing. They are working on some sort of project called Substreams, which allows you to build these Rust-based modules that perform some sort of indexing, and apparently it's going to be much faster. It doesn't seem like it's meant to be a product for smaller protocols, or really for everyone, because building Rust-based modules is not too user friendly. But I would imagine that building something Beanstalk will eventually need to support might warrant that. It's unclear. To me it's confusing why even the WebAssembly version is that slow. Yeah, I hear that. We'd have to study it a little bit more; I think it's just querying a lot more data. And with respect to Substreams, I've seen this as well, and I think a deep dive on what they're trying to do there is in order. But you're totally right that moving into the Rust world probably improves efficiency to some extent, but also just makes it a lot harder to build stuff. When I last checked on the project it was quite nascent, so it's not clear to me, from an ecosystem standpoint, when that tech is going to be ready. But I'd highlight that we know The Graph team reasonably well and could probably get in touch with them to have a bit of a discussion about where they want to target. So one future-looking approach might be to try to do something like this Axiom, which is just worth mentioning for the sake of trying to get all the options out there. It seems like they provide a similar service, or it has a similar goal, as The Graph network, the goal being proof of indexing: that what is being returned is a function of Ethereum state.
And it seems like this protocol uses zero knowledge SNARK technology to do so, and therefore is able to provide a proof that anyone can verify to confirm that the data was correct. It seems like it doesn't really support a general programming language yet, but it is something that is likely to be forward looking. Although, the contracts are not yet even audited, they probably haven't done much testing, and there's likely to be a lot of technical difficulties working with the team if it seems like a route to go down. Yeah, I would just say, comparing this to Substreams as well, I think the ecosystem probably isn't there yet in terms of the tech, but it's good to see that there are a couple of different options in terms of other folks who are thinking about this, and I think a full investigation is in order. So perhaps the call to action here for everybody who's listening: a lot of this tech is getting developed quite quietly in different corners of the Ethereum community and the crypto community at large. We keep a pretty close finger on the pulse of what's going on, at least as best we can, but we would love to encourage everybody to share stuff in the development channel or elsewhere that might be relevant to this effort, if you see it come across Twitter or hear about it from someone. I think getting a good landscape of what tools are currently available, and also what's being developed under the hood, would really help us make a good engineering decision here. So would you say it is also an inevitability that languages like JavaScript, Go, Rust, all the popular languages, will eventually be verifiable through some sort of zero knowledge technology? And from this perspective, building a fully custom indexing solution would be an option, as it would be built with the assumption that there will eventually be decentralized hosting technology that could serve the indexer, even if it doesn't necessarily exist today. In the meantime, a significant emphasis should be placed on open source and on making it as lightweight and easy for users to run as possible. But just trying to make sure that all the options are on the table for what it might make sense to start building: option one is building a subgraph that leverages The Graph network, where there's already a decentralized network with some sort of proof of indexing that could be leveraged. Option two is to try to use some kind of avant garde but still not yet fleshed out solution like Substreams or a zero knowledge protocol. And the third is building some sort of custom in-house solution that's intended to be lightweight and runnable on a standard node, performing some sort of indexing on the blockchain, with the assumption that decentralized hosting and decentralized proof of computation will become sufficiently generalized to eventually house this custom indexing solution. I guess I would be curious to see if there are any other options that people feel might not have been covered, just so that we're all on the same page about what's out there.
Can you maybe talk more about what you mean when you say a custom indexing solution, what that would consist of? Yeah, honestly, I'm not too familiar with the requirements for this technology myself, and if anyone has experience with these sorts of systems I'd appreciate them speaking up, but I'll give it my best go. What it would consist of is some sort of data ingestion, where it's able to sequentially read a lot of data from different blockchains, or generalized different inputs. It would likely be able to do so in an optimized sense, where it's able to establish some sort of relations, remembering where the relevant information is that actually corresponds to the query, trying to cut out the noise to make it optimized and quick at re-indexing. Then it should somehow be able to combine a bunch of data inputs from different blockchains, still sequentially, maintaining the metadata corresponding to the chain they belong to, and then establish some sort of relations on top of that, maybe in the form of a Postgres database or some sort of database where the mappings are defined. The general execution flow would be a set of handlers, or functions, that are triggered by the data that it processes from the chain or other inputs. The goal would probably be to have some sort of generalized way to ingest a specification, which would be, say, the order book, that could be open sourced so anyone could build on top of it, and that would house all of the different mappings required to index Tractor orders or something like that. The equivalent of what the subgraph would do would then be done through this software. And just as reasons not to use the current tech, to go through the three options: one issue is that subgraphs currently are unable to index multiple chains as they are right now. Another, being the point that was brought up earlier, is that subgraph development is not very user friendly. First off, there seems to be not great documentation or clarity around how to even get it working on some local instance. And then when it does work, it's still incredibly slow, and it's incredibly slow in production, making it very hard to iterate. You can find yourself indexing a subgraph, and then maybe 2 hours in there's some null pointer exception or something that breaks the subgraph. You might quickly find the bug, since it's going to tell you where it broke, and you can go fix it, make sure that you instantiate some variable, and then you've got to wait another 2 hours to have it keep going, which is something that's quite difficult. So one big problem with subgraphs is that, yes, they can't index multiple chains, which is something that should be very important for any sort of data ingestion engine. It doesn't seem like Substreams can necessarily either, and it doesn't seem like they're necessarily concerned with adding support for it, at least from this perspective.
If anyone knows, definitely please speak up. And, while I don't know enough about them, I would just want to make sure that there is some sort of collaborative, open source development ecosystem that makes it very easy for a development community to build out around the technology, where the goal is to collectively continue to optimize, maintain, and develop the project going forward. Because ultimately there are limitations based on how efficient it is, and it's unclear how much of that sort of community The Graph might have; I honestly have no idea. And then compare that, I'm trying to compare the third option of building a custom solution with option two, which is to use Substreams. Yeah, my understanding is that there's a time issue there, and an issue with complexity. Think about the amount of difficulty: there's a substantial amount of overhead in finding some sort of developer that knows how to do it and knows how to use Rust. From the perspective of actually implementing subgraphs, it's pretty easy: it's in AssemblyScript, compiled to WebAssembly, which is very similar to JavaScript. The majority of developers are comfortable with JavaScript; JavaScript is very easy to use and relatively efficient. And you compare that with these Rust-based modules, which are likely going to be very complex in nature, and there's not much documentation yet about how these things work. So it's likely to be some specialized skill set that will be required to learn, for a technology that hasn't necessarily proven itself. It seems like it's fast, and it's better than what exists, and there's a lot of hard work that's gone into the product. But it's also unclear whether it will support multiple chains, and the ability to support multiple chains is quite important as protocols start to deploy to multiple chains and users are allowed to assess what level of security they want, especially when it comes to any market protocol, any exchange, where there's kind of a "why not also deploy this on this chain" type of mentality that applies to it. And it's likely going to be quite buggy, as it is a new product, and it'll likely require working closely with their team to figure out how to do things, as there's not too much documentation. But that's not to discount the incredible work I'm sure they've done to make sure that this is way more efficient than their previous iteration, and it could be a great option. And is it just that we don't know the answers? Because my understanding is that we've spoken to them in some capacity, and it doesn't seem like this is going to be ready, particularly the multiple chain indexing, anytime soon. Is that right? That seems to be the case.
And so, while it may theoretically be a solution to the indexing issue, based on timelines it's probably not, particularly in the context of other competitive solutions like Axiom, which we'll talk about in a second. It's probably unlikely to be the long term indexing solution, and if it's not going to be ready for a year plus, the idea that that's where we're collectively as a community going to bank our hopes doesn't seem to be an efficient strategy, at least as I understand it. Would you disagree with that? I don't think so. So maybe let's talk about Axiom a little, and I know that that's something that's been a little bit new for us even. But can you talk a little bit about, you already mentioned it a little, what the implications are of the ability to prove computation on on-chain data in a zero knowledge fashion? What could that mean in terms of an implementation of an order book, or some of the other things that we may need indexed, and what would that look like in that context? And perhaps juxtapose that with a custom solution. So, Axiom seems to be some sort of decentralized provable compute that can run some level of programs. What's great about succinct proofs, which is what zero knowledge technology uses, STARKs and SNARKs, is that it generates proofs that are much smaller to verify, in time and in size, than the computation being proved. What that does is allow a greater amount of computation to be given to another party that might have more computational power. The output is then returned along with the proof, the proof can be verified by you in a succinct fashion, and the size of the proof itself is succinct. So it is a mathematical way to have provable computation start to be passed around and delegated. Now, what this means is, say you want to verify some sort of logic that's running on top of the blockchain, some other indexer ultimately; compare it to the solution that something like The Graph might use, where you assume that they give you the correct answer. If verifying the proof is not succinct, the only way to confirm that the information returned to you is correct is to do the entire computation yourself. And if you have to do the entire computation yourself to confirm that the computation you're receiving is correct, there's no point in giving someone else the computation, as you'd just have to do it yourself. So when it comes to fraud proof systems, the first system, which is what is used by solutions like Optimism and Arbitrum, is an optimistic system, where someone submits the state, and the fact that someone else could compute the state and submit a fraud proof saying that they're wrong means the submitter is disincentivized from being malicious. But in the zero knowledge sense, anyone can prove that the state of something is correct.
So what this means is that any node, any indexer, running anywhere, and it could be on a decentralized network or not, it kind of doesn't really matter at some point, because you as a client can actually prove that the information you're getting is correct. It's still very unclear what the scope of what this technology could be used for is; it's very experimental technology, it's unclear how user friendly it will be, and it will probably require serious effort to assess whether it's even a truly viable option. So compare it to option one. With option one, the technology is known, and it's probably going to be the easiest to get going, whereas option two requires serious investment to understand the technology in the first place, to even start efficiently building on top of it. Compare that to option three. Option three can really start in whatever capacity is desired, with, again, the goal being to make it more lightweight over time. But it would require someone who understands this technology to really do a deep dive on how to build it efficiently and well. It would be very easy to spin up a very basic version, so the scope of the initial project could be determined based on the timeline on which these products want to be launched. But you don't get any of the proof of compute: you're not able to verify that the indexing is done correctly unless it's deployed to a decentralized network. I guess, just to talk about when that might happen: it seems like this succinct proof technology, the SNARK technology, is getting to the point where it's able to fully encompass programming languages with the ability to prove compute. The fact that Axiom can already prove a limited set of computation is quite promising for future proofing it, where eventually the point will be reached where any program written in any language can be proved through some measure of zero knowledge, or succinct, proof. And therefore there's the consideration that it may not even matter what programming language such a product is built in, because there will at some point be some decentralized network that it could deploy to, and if it doesn't exist, sure, modifications can be made to the zero knowledge technology to make it exist, or it could be built if needed. Yeah, I would be curious what other people here think about this. But maybe to start: do you have thoughts or a preference on a tier list of what you think are the best solutions or options here? You know, it's really hard to know. It kind of depends on when and on what timeline people want to try to build, and sufficient consideration needs to be given to all the options.
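
To make the delegate-then-verify pattern described above concrete, here is a minimal sketch in TypeScript. The types and the verify callback are hypothetical stand-ins; nothing here is a real Axiom API.

  // A heavy computation (e.g. indexing an order book) is delegated to someone
  // else; the client checks a succinct proof instead of redoing the work.
  interface ProvenResult<T> {
    output: T;          // e.g. an indexed order book snapshot
    proof: Uint8Array;  // succinct (SNARK/STARK) proof: small and fast to check
  }

  type Verifier<T> = (result: ProvenResult<T>, publicInputs: Uint8Array) => boolean;

  function acceptIfProven<T>(
    result: ProvenResult<T>,
    publicInputs: Uint8Array,
    verify: Verifier<T>
  ): T {
    // Verification cost stays roughly constant as the delegated computation
    // grows; if it did not, the client may as well recompute everything itself.
    if (!verify(result, publicInputs)) throw new Error("invalid proof");
    return result.output;
  }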
But I think Charlie brought up some points earlier about how decentralized hosting of the UI is already possible in some capacity, and thus it does seem that the least common denominator is going to be this indexing solution. But Chad also brought up the point that alternative, more lightweight indexing solutions could be used locally in clients, in the case that there's no trustable indexer, or for someone who wants to run a fully decentralized tech stack. I personally think this Axiom thing seems really cool, and definitely some time should be spent playing around with it and seeing what it's capable of, if Axiom does have a very good generalized interface. I'm generally against building technology that's not agnostic to the platform it's running on; building the Rust-based modules for Substreams seems very specific and very complex for its specificity. As for the problems with development of the subgraph, I personally don't think switching to Rust-based modules is the thing that makes them that much more efficient. I guess I would be curious to hear from other developers how complicated they feel this subgraph is compared to other projects, and what they think about it. But on this end, there's definitely something very appealing about building a local custom solution as well. If the local custom solution could be built in maybe a pared down version in Axiom, it might make sense to start there. But ultimately, I think whatever product is built should be built in the context of decentralized, verifiable compute. And perhaps it might make sense to build a node that can handle WebAssembly the same way the subgraph can, and essentially be a much better node that can process the same types of inputs. If I had to pick a tier list, I would probably say: if Axiom can do what it should be able to do, and we have more understanding around their timeline. I guess their code is not audited, so it can't necessarily be trusted, but that's not to stop some local indexer from running it in the meantime, while an audit is in process and problems are ironed out. It might be worth doing a deeper dive on their code base, seeing truly how fleshed out the project is. But I think it's pretty hard to know, and I guess I'd challenge the community to try to find any competitors in the space that are working on some kind of succinct way to do generalized hosting, and to share them. And then, if the scope of the operations that Axiom allows you to do is incredibly limited, I'd say it probably doesn't make sense to go with that solution, as it seems like some generalized, Turing-complete zero knowledge proof of computation is the next logical step after something like Axiom.
If so, it probably makes sense to go with some fully custom JavaScript solution, unless the community feels strongly that the subgraphs are a sufficient development environment. I'd be curious to hear from developers building on it: how much time do they feel is wasted waiting for it to index? Or is that the wrong sentiment entirely? Do people disagree, and find it particularly easy to develop on? Not sure if we have many of the top subgraph devs here; looking around to see if they want to comment.

I'd hop in, but I missed probably 80% of what was just said.

Awesome, we'll catch you up real quick, Joe. We're discussing the general framework under which it might make sense to develop some sort of indexer for an order book on top of Tractor, potentially starting out in the context of Beanstalk derivatives: Pods and Deposits. We're trying to align ourselves from a long-term perspective: does it make sense to build this technology with the subgraph, or does it make sense to build some custom JavaScript solution that could ultimately be deployed to some sort of zero-knowledge proof-of-compute system, allowing a type of proof of index beyond what the subgraph currently allows? As someone who has dealt with a lot of the challenges of developing with the subgraph, could you talk a little bit about what you think of potentially building an indexer as a custom solution?

I think subgraphs are actually relatively easy and standardized to work with, as long as we've got the right data being emitted inside events, because they do best with events in the current implementation of The Graph and graph-node. We've been able to utilize custom handling to get things like the API up and running relatively smoothly. Where it bogs down is in trying to re-index: performing an update and then waiting a while for even a quick fix to sync. That's where we run into some issues. In terms of creating an indexed order book for Pods or other items within Beanstalk, one question I have is how performant and managed it needs to be. Are we expecting something similar to an exchange, with an API where users are expected to check statuses, perform updates, and have those updates happen in real time? Because at the end of the day, if it lives on chain, the actual ordering in the block is the final order. So how reactive do we need it to be in displaying and updating?
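To make Joe's point that subgraphs "do best with events" concrete, here is a minimal sketch of what a mapping for a hypothetical Tractor order-creation event could look like. Subgraph mappings are written in AssemblyScript, a TypeScript subset; the `OrderCreated` event and `Order` entity below are illustrative assumptions, not Tractor's actual interface.

```typescript
// Minimal subgraph mapping sketch (AssemblyScript, a TypeScript subset).
// OrderCreated and Order are hypothetical stand-ins for whatever Tractor
// ultimately emits and stores, not the actual interface.
import { OrderCreated } from "../generated/Tractor/Tractor";
import { Order } from "../generated/schema";

export function handleOrderCreated(event: OrderCreated): void {
  // Key the entity by order hash so later cancellations/fills can find it.
  let order = new Order(event.params.orderHash.toHexString());
  order.publisher = event.params.publisher; // account that signed the order
  order.data = event.params.data;           // encoded list of function calls
  order.active = true;                      // assumed valid until proven otherwise
  order.createdAt = event.block.timestamp;
  order.save();
}
```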
Yeah. So maybe to walk through what will likely be required, given the current state of the design: Tractor is proof of the willingness to perform some type of on-chain action using a user's assets. So there is a requirement that the user actually has those assets for the Tractor order to be executable. If someone creates an order to swap 100 Bean for 100 USDC and I then transfer all the Bean out of my wallet, that order is no longer valid. However, if the Bean comes back to the wallet, the order is valid again and needs to be marked as such. So there needs to be very tight communication around the transferring of assets: ultimately, for every single token that is going to be supported by the market, transfers between users with open orders will need to be tracked, and the number of users with open orders is probably going to continue to grow. Order cancellation is an on-chain action, so orders are canceled when someone performs an action on chain. But when Bean is transferred (say, in the 100-Bean-for-100-USDC case, the Bean leaves my wallet), the indexer now needs to look at all of the open orders attached to my account and update them to see whether any of them became inactive, or active. Where there might be a lot of efficiency loss is in this transferring of assets. I'd ask: the way the subgraphs are currently built, is there any way to say "only listen to token transfers between this set of users," or is that not easy to do?

That's not something, no. To listen to any token transfer right now, to the best of my knowledge, you have to listen to that token contract and listen for the Transfer event. If it's not going to an account you care about, you just disregard it and don't process it any further.

So what are your thoughts on that in general? If we think of this at scale, with tens or hundreds of different tokens having orders issued against them within Tractor?

That would be really hard to scale on the subgraph side, just from the perspective of updating orders on a transfer of assets, because there's no event for it. You're just transferring 100 Beans out of your wallet; the only event that's going to be emitted is the Transfer event on the ERC-20 token. Tractor is not going to have any knowledge of that until it tries to execute an order, right?

Right. It would just try to execute the order, you don't have the Bean, so it's not active, and it moves on. So the cost of reporting a wrong order is just that the UI displays it incorrectly. There is also the approach that was brought up of simply storing orders without doing all of this validity indexing.
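As a rough illustration of the revalidation loop just described (listening to every Transfer on a supported token but only re-checking accounts that have open orders), here is a sketch in TypeScript using ethers v5. The `Order` shape and the `openOrdersByAccount` map are hypothetical; the point is the shape of the work, not a real API.

```typescript
// Sketch of transfer-driven order revalidation (ethers v5). Hypothetical
// Order shape; in practice this state would live in the indexer's database.
import { ethers } from "ethers";

interface Order { active: boolean; amountRequired: ethers.BigNumber; }

const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
const erc20Abi = [
  "event Transfer(address indexed from, address indexed to, uint256 value)",
  "function balanceOf(address) view returns (uint256)",
];

// For every token the market supports, watch its Transfer events...
function watchToken(
  tokenAddress: string,
  openOrdersByAccount: Map<string, Order[]>,
): void {
  const token = new ethers.Contract(tokenAddress, erc20Abi, provider);
  token.on("Transfer", async (from: string, to: string) => {
    // ...but only re-check the two accounts involved, and only if they
    // actually have open orders; all other transfers are disregarded.
    for (const account of [from, to]) {
      const orders = openOrdersByAccount.get(account.toLowerCase()) ?? [];
      if (orders.length === 0) continue;
      const balance = await token.balanceOf(account);
      for (const order of orders) {
        // An order flips between active/inactive as the balance crosses the
        // amount it commits; it is only removed by on-chain cancellation.
        order.active = balance.gte(order.amountRequired);
      }
    }
  });
}
```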
Then, on top of that, a Tractor order is a list of function calls. So this order book will need to parse through all the selectors in a Tractor order, determine the ERC-20 tokens that are part of that order, somehow store that reference, and recognize, say, that a plot transfer is paired with a corresponding pricing function: that the combination of those two function calls corresponds to a pod order with a given formula. So everything that's part of Pod Market v2 is going to be a part of this too. In general, the goal should be to develop it in a way that's sufficiently generalizable, so that as more types of markets get created on top of Tractor and Wells, it can be marginally expanded: the indexer can start indexing more types of actions and grow moderately in the number of markets it's able to support.

Yeah. There would definitely be some methods we could use to get around some of the limitations on the subgraph side. The subgraph could learn to listen to any token that has an order created against it; where that ends up being a problem is that we've got to start storing a list of all the addresses that have an order on Tractor and then start listening for those Transfer events, and that could probably balloon out pretty quickly. The other interesting challenge is that indexing performance is very dependent on the actual indexer machine. Some people might have a beefier setup servicing a hundred-some subgraphs on the decentralized network, versus Beanstalk Farms hosting one cloud server that's just dealing with ours, and that's where indexing delays come in. In theory, as long as the subgraph and the indexer can process events in real time and keep up with the chain, there's not an issue. But to generalize everything out, and then to think about how to evaluate different pricing approaches, see if they match, and expose that data as well: whether I would do that on a subgraph, I'm not entirely sure off the top of my head. So I think it's worth at least exploring what our pain points are and what we really need to generalize. A lot of this stuff we could do with a very small whitelist of tokens or something like that; generalizing everything out is where it could potentially balloon and have issues at scale on the indexing side. I'm just brainstorming, thinking about this at the moment. Very interesting problem to have.

About how long does it take to index the current Beanstalk subgraph, on, say, a local MacBook versus a server that's a pretty serious machine?

Right now it takes about 5 to 6 hours, I think, to resync the entire Beanstalk subgraph from scratch using a cloud server that's just making RPC calls out to a remote node; I don't have access to a machine that has one locally. There's been some development on the graph-node side to allow indexing to happen in parallel rather than having to go through everything in sequence. I haven't gotten any of that set up on my personal infrastructure yet, so I don't have a timeline on it, but it's a pretty popular piece of tech that a lot of the major decentralized subgraph indexers utilize.
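Returning to the selector-parsing step described a moment ago, here is a minimal sketch of how an order book might classify a Tractor order by the 4-byte selectors of its encoded calls. The selector constants and the pod-order pattern are placeholders, not Beanstalk's actual ABI.

```typescript
// Selector-based order classification sketch. The selectors below are
// placeholders; real values would come from Tractor's finalized ABI.
const SELECTOR_PLOT_TRANSFER = "0x12345678"; // hypothetical 4-byte selector
const SELECTOR_PRICING_FN = "0x9abcdef0";    // hypothetical 4-byte selector

function classifyOrder(encodedCalls: string[]): string {
  // The first 4 bytes of calldata (8 hex chars after "0x") are the selector.
  const selectors = encodedCalls.map((data) => data.slice(0, 10));

  // A plot transfer paired with a pricing function reads as a pod order.
  if (
    selectors.includes(SELECTOR_PLOT_TRANSFER) &&
    selectors.includes(SELECTOR_PRICING_FN)
  ) {
    return "pod-order-v2";
  }
  // Unrecognized combinations are stored raw; a future module might decode them.
  return "unknown";
}
```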
From your perspective, does it seem like there's an active development effort within The Graph community to further improve the graph-node indexer and increase speed at the actual node level?

Yes, definitely. There have been big improvements to Firehose, and there's another stack on top of Firehose called Substreams that they've been actively developing and improving to allow indexers to process subgraphs in a much more performant manner.

This was brought up earlier, but it's our best understanding that Substreams are built as custom Rust-based modules, instead of WebAssembly like subgraphs?

Yes, that's right; that came up earlier today, in fact.

So it would be a completely new development tech stack to build on; it's an in-progress development effort on their side; it's unclear how long it will take them to add support for it on the decentralized network; and it seems like a pretty custom skill set has to be learned in order to implement a Substream. It also seems like a lot of the speedup might actually come from removing inefficiencies in the indexer, as opposed to using a more efficient programming language like Rust to build the actual modules.

Yeah, I'd say that's fair. A lot of your performance issues are going to come from either how fast the node you're querying is, or the actual specs of the server doing the indexing, especially when it comes to doing different levels of processing on that front.

Very interesting. Do you have any experience yourself with what would be required to build some sort of custom indexer that could ingest data from the blockchain and aggregate it into some sort of queryable database, like a custom graph-node implementation? And I guess another pain point is that it doesn't seem like The Graph or Substreams plan to support multi-chain indexing anytime soon. Tractor, and this whole overall structure, needs to seamlessly support new chains; it needs to interoperate between them; users need to be able to set up orders on different chains; and there doesn't seem to be much impetus to get that done on their end. That's the other pain point when it comes to potentially using a subgraph for this.

As far as expertise on a custom indexer, that's honestly probably a bit out of my wheelhouse at the moment. But in terms of multi-chain support, that is actually one of The Graph's biggest initiatives, launched at the end of 2022: they've already brought, or are about to bring, Gnosis Chain onto the network.

What does that mean, exactly?

They launched what's basically a multi-chain indexing program, and have been running testnets with indexers and developers for Gnosis for the last couple of months. It will allow the indexing of both Ethereum and Gnosis Chain.
They're technically separate subgraphs, though, and this is unfortunately one of the pain points. Say a Tractor order is published to Arbitrum but is intended to be settled on Ethereum; the order can be published on Arbitrum because it's much cheaper to do so there (it can be done for cents), and the only concern is censorship resistance. Ideally the indexer would be able to listen to that order creation on Arbitrum while simultaneously tracking the user's assets on Ethereum to make sure they have the required assets. Also, the cancellation event comes from the settlement chain, so the cancellation would happen on Ethereum even if the order was published on Arbitrum. So it doesn't seem to make sense to have some downstream service perform that aggregation, since determining which orders are valid is really the bulk of the indexing work. From this perspective, it doesn't make sense to have one indexer indexing mainnet and another indexing Gnosis Chain and then aggregate those indexes; it's probably just better to have one store that can listen to both of them at the same time.

Mm hmm. That flow is interesting. So I can initiate an order on one chain against assets on Ethereum?

Yes, or on whatever blockchain you're comfortable using, given that the only concern is censorship resistance.

Gotcha. So how would the Tractor deployment on the other chain know that it's a valid order?

Everything is a valid order until it's canceled, and the existence of your signature proves that you signed the payload, and therefore that it's valid. It's an EIP-712 signature: the typed format is an order, and on Ethereum it can be confirmed that your private key signed the order, via your public key; then it's checked that the order hasn't been canceled. And obviously, if any function call in the Tractor order reverts (which could include a call to some function that checks that it's not canceled), the order won't be able to be executed.

Wow, okay.

This is how Seaport works, minus the generic function execution. Seaport has this same general format, but it only supports the exchange of ERC-20, ERC-1155, and ERC-721 tokens as transfers.

Okay. Yeah, then a subgraph doesn't make as much sense. You'd have to add another aggregation layer on top of the two base per-chain subgraphs, and the majority of the indexing work would probably need to be done there, so it wouldn't help much.

Right. So if a subgraph solution were built, then until some sort of cross-chain support was added, people would only be able to publish orders on the same chain they're intended to be fulfilled on. Orders would probably cost something around $0.50 to $1.00 instead of maybe $0.05 to $0.10, or even $0.01 to $0.03. The goal is to get order creation as cheap as possible, so that's a pretty important property to have, especially since people are often updating and canceling orders frequently. Updating and canceling has to happen on mainnet, unfortunately; in this structure there really doesn't seem to be a way around that, but the goal should be to make it as cheap as possible.
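Returning to the signature flow described above, here is a sketch of how an off-chain indexer could check that an order payload really was signed by its publisher, using ethers v5's EIP-712 helper. The domain values and the `Order` struct fields are placeholders; Tractor's actual typed data would come from the contract once the interface is locked.

```typescript
// EIP-712 authenticity check sketch (ethers v5). Domain and struct fields
// are placeholders, not Tractor's real typed data.
import { ethers } from "ethers";

const domain = {
  name: "Tractor",   // assumed domain name
  version: "1",
  chainId: 1,        // orders settle on Ethereum mainnet
  verifyingContract: "0x0000000000000000000000000000000000000000", // placeholder
};

const types = {
  Order: [
    { name: "publisher", type: "address" },
    { name: "data", type: "bytes" },    // encoded list of function calls
    { name: "nonce", type: "uint256" },
  ],
};

// True if `signature` was produced by order.publisher over exactly this
// payload. Cancellation status still has to be checked on chain.
function isAuthentic(
  order: { publisher: string; data: string; nonce: number },
  signature: string,
): boolean {
  const signer = ethers.utils.verifyTypedData(domain, types, order, signature);
  return signer.toLowerCase() === order.publisher.toLowerCase();
}
```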
I don't have a clear answer at this point in time as to the best path forward. So, to circle back collectively and see how we're feeling: it seems like the Substreams solution might not be that great, and the subgraph solution comes with particular limitations. I'll open it up to the table: is anyone familiar with what the requirements would be to set up such a node, or have a good idea of the different options in terms of database solutions and ways to index the chain efficiently, and care to share?

I'd just say that those are all separate but important questions to cobble together. You've got a database layer; you've got some sort of execution layer that's either streaming chain state or polling it from time to time; and then a layer to parse that information and update the database accordingly. That's the high level. There's also a piece on top of that which exposes the data through some sort of API; in the case of The Graph, obviously, that's a GraphQL endpoint.

One thing I'd add, which I don't think subgraphs would support in their current implementation but an independent indexer that we build could: at least in the case of Tractor orders, one possible way to evaluate whether an order is valid is just to use on-chain static calls to verify that all of the actions encoded in the Tractor order can actually be executed. This navigates around some of the technical problems with parsing the order itself, understanding what's included in it, and then somehow detecting when the conditions it specifies are no longer true. An example would be a Tractor order that includes a token transfer: in a more complex paradigm, you would have to somehow detect that it was transferring a token and then track whether the user still has a balance of that token. One way to circumvent that is to just test the whole thing in a sandbox, which also has some other side effects. That's the research angle I'm most excited about, but I believe it would require, at a minimum, moving away from The Graph, and probably not using Substreams either.

Yeah, with that angle, one thing that becomes a little hard to do: say you created some pod pricing function such that it only supported Pods between 500 and 550 million in the Pod Line. The sandbox would have to know what to test it with. How would you determine that 500-to-550 range? It seems you would have to decode the data to know that that's what the range actually is, unless you do some sort of iterative process.

Yeah, I'd have to think through that case more in depth.

That's where how people are going to decode all the data, match everything up, and make sure we get updates across the board is going to require some thought.
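A minimal sketch of that sandbox idea, assuming each of an order's encoded calls can be dry-run against the Tractor contract with eth_call (ethers v5; the contract address is a placeholder):

```typescript
// "Sandbox" validity check sketch: dry-run each encoded call with eth_call
// and treat any revert as "inactive". TRACTOR_ADDRESS is a placeholder.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
const TRACTOR_ADDRESS = "0x0000000000000000000000000000000000000000";

async function isExecutable(encodedCalls: string[]): Promise<boolean> {
  for (const data of encodedCalls) {
    try {
      // eth_call executes against current state without sending a transaction.
      await provider.call({ to: TRACTOR_ADDRESS, data });
    } catch {
      // Any revert (insufficient balance, canceled, bad params) marks the
      // order inactive for UI purposes; nothing is written on chain.
      return false;
    }
  }
  return true;
}
```

As raised above, a dry-run alone is not a complete check: calls whose validity depends on parameters buried in the order data, like a pricing function that only covers part of the Pod Line, would still need decoding or an iterative search to know what to test with, and sequential calls that depend on one another's state changes would need a forked-state simulation rather than independent eth_calls.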
One thing I'd sort of add there is that I think developing a list of specific, common Tractor orders that we want to index into a market would help. By indexing into a market, I mean taking some set of orders which encode a series of transactions that are common or shared amongst them, indexing those, and providing a UI for it. One example would be a pod market in which bids can be placed using Bean Deposits. I think deriving a list of those examples will be immensely helpful for thinking through, in practical terms, where the current ideas start to fail; what you just mentioned, Publius, is a really good point. In terms of my own thinking, I'm probably going to start with the Bean-Deposits-versus-Pods paradigm, but I would encourage others to suggest other sorts of Tractor markets we might want to think through.

Yeah. The other Beanstalk derivatives, namely Fertilizer and Deposits: perhaps Fertilizer versus Bean and/or Bean Deposits, and ERC-20 tokens versus Deposits. I also think some sort of Convert order book would be cool, where Farmers can set limit orders for the ratio or price at which they're willing to Convert some of their liquidity; and some sort of auto-Plant system that lets Farmers indicate they're willing to pay some amount of Beans or Deposited Beans in exchange for someone Planting on their behalf, given some Grown Stalk number or some number of Earned Beans, or both. The expectation would be that some bots are performing this indexing, either themselves or via some local indexing solution, and executing these Converts on behalf of users; and it would be great to see that Convert order book of limit orders in the UI. Those five use cases open up a lot of options for how Tractor might be used, specifically in the context of Beanstalk.

One other approach to highlight: we've thought about this a lot in terms of the most generalized indexer, but we might go study other applications, like DefiLlama or the cohort of sites that index dapps, prices, and that sort of thing, and how they create a sense of modularity. For example, we could work on a base indexer of some kind, where it's up to the dapp developer to expand upon it in such a way that it's capable of decoding whatever elements of a transaction they want decoded. In the case of the Convert order book, the layer they would be adding is the parsing of the Convert parameters encoded in the Tractor order. There are layers to this, I guess, is what I'm saying.
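A sketch of that modularity idea: a base indexer that stores raw orders and fans them out to market-specific decoder modules supplied by dapp developers. All names here are illustrative.

```typescript
// Base-indexer-plus-decoder-modules sketch. All names are illustrative.
interface RawOrder {
  orderHash: string;
  publisher: string;
  calls: string[]; // encoded function calls from the Tractor order
}

interface MarketDecoder {
  marketId: string; // e.g. "pod-market" or "convert-order-book"
  // Return a decoded, market-specific view of the order, or null if this
  // decoder does not recognize the order's shape.
  decode(order: RawOrder): Record<string, unknown> | null;
}

// The base indexer stores every raw order and lets each registered module
// attempt to decode it; a Convert-order-book module, say, would parse the
// Convert parameters here.
function indexOrder(order: RawOrder, decoders: MarketDecoder[]): void {
  for (const decoder of decoders) {
    const view = decoder.decode(order);
    if (view !== null) {
      // persist { marketId: decoder.marketId, orderHash: order.orderHash, ...view }
      // to the database layer
    }
  }
}
```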
Cool. So it seems like more exploration and research needs to be done into a custom solution, but that also seems to be the way the community is generally leaning as the better solution. Just to bring it up: there is some potential opportunity to create a subgraph, limited in some capacity to maybe one to three markets, that would service Beanstalk long enough to really spend the time getting the indexer solution right. At the same time, if the custom indexer is the goal, there could be some more short-term product, maybe not as efficient or as generalized as the ultimate goal, but a working proof of concept. As I think about what makes sense as the goal for the first version of this custom indexer: what seems like a good time frame for the scope of this project? And if that time frame is on the longer end, does it make sense to work towards some sort of MVP subgraph in the short term?

I'm happy to share my opinion here, but I'd be really curious to hear how other people feel. To preface: the initial version of the Tractor code as it currently stands has been written by a developer, but it has not yet gone through team code review, which will be a very important process, and then it would also need to go through audits. I think at least code review is necessary before we can really have some sort of interface lock on the contract side in order to make indexing decisions. And we may want to do a bit of that process in parallel, so that we can adapt the contract functionality slightly, if necessary, to accommodate what indexers need. So that's the first piece; there's some work cut out for us there.

My take is that if we want to back into a system that's going to be generalized over the long term, I would suggest starting by developing a list of interesting Tractor markets we want to build in the short term: pick four or five options, then pick one to start with. All of the ones mentioned on the call today are great, but that's probably a point for additional brainstorming. In my mind, once the Tractor code has been reviewed and goes to audit, a working version of an indexer (given my understanding of the Tractor code base in particular, and assuming we can narrow the playing field to a couple types of markets) is probably something like a couple months of work. Even building a custom solution, I think we could come up with an MVP version that is performant, works for those cases, and gives us a lot of information about whether to grow that particular piece of code or switch to another implementation. So we can kill two birds with one stone: roll out something that's valuable as a market while doing engineering research and learning in the process of building it. That would put us in a world where, call it three months after getting the Tractor code to audit, we have a working implementation that can actually get used, and from there we'll be in a much better position to chart our course for the longer-term piece of infrastructure. I think a futures market for Bean would be interesting to implement, potentially also for Stalk, to fill that initial use.
But some of the other markets that have been mentioned are also very interesting to consider.

At least on my end, I need to dive into the channels and get a better feel for where the Tractor code is and how this flow works. And then, yeah, honing in on a few targeted markets to validate that it does work and that the model or approach we take with those can be sufficiently generalized feels like a good starting point.

Does anybody feel like that approach is incorrect and we should be thinking longer term from the start, or that the markets mentioned over the past half hour are not the right ones and we should do something else? There was also a good question in the chat: are these markets that we will build, or that we hope others will build? That's a good question. I would say that, given that Beanstalk Farms has already developed the Pod Market in an open-source capacity in particular, it would make sense to use Tractor to extend the Pod Market's functionality. With respect to other markets, like the futures one and the others that were mentioned, whether Beanstalk Farms deploys or operates those (whatever that ends up meaning) is a totally different question from using them as an exercise in engineering, where we think about what it would take to use Tractor to do that type of transaction.

I do want to ask your question from earlier about the value of prioritizing censorship resistance at these other layers of the stack.

Yeah, sure. My question is really even more high-level. After the whole Tornado Cash situation, it really made me question what anyone is doing in terms of decentralization and censorship-resistance efforts. Up until that point (and this is why I want some other points of view) it feels like we all had this veil of belief that as long as our tech is decentralized, you can't shut down all the nodes, contracts are immutable, you can't kill a contract, and so on. But Tornado Cash came along and proved that regardless of whether the protocol is still running or not, the government can completely ban it out of existence. They blacklisted Tornado Cash, and they blacklisted anyone that touches the protocol: you touch that protocol, your address gets blacklisted, exchanges won't talk to you. So it's effectively worse than dead. My question is: if that happens to Beanstalk, is the assumption here that we're going to continue to operate, and that we're building our systems to live in that scenario? Because to me personally that seems completely fruitless, but I'm probably wrong. So I want to understand how others reason about this. Why are we trying to be censorship-proof when it's been shown that it's kind of irrelevant, and the government can still shut you down?

That's a longer discussion, I think. Publius, I think we're losing you a little bit, unless it's just on my end.

It's not just you.

So while we're waiting for him, I'd just go ahead and say that I don't think the Tornado Cash situation was a failure.
Everyone who still has funds in Tornado Cash is, I'm pretty sure, still able to withdraw from it. Sure, they're not able to then move that currency onto a centralized exchange, and some validators at the protocol level are censoring transactions that contain addresses on the OFAC list. But from my perspective, everyone who participated in Tornado Cash is still able to use the Ethereum blockchain. The only reason there's a problem is that they have to off-board their money from the blockchain in order to spend it in the real, physical world, and thus restrictions can be placed on that process. But that kind of vector becomes less and less of a problem as more and more economic activity starts to happen on the blockchain.

Sorry, I got booted; my Discord app has been super funky. I know we're at the two-hour mark, but on this end, one of the ways I think about it draws on the differences between something like Tornado Cash and Beanstalk. Because Beans are, if everything works out, the ubiquitous foundational money layer, by the time governments do attempt to censor Beanstalk, there's some sort of escape-velocity phenomenon that happens before and during that. And to Publius's point, the core problem is that the economic activity people want to participate in is not currently on chain. When it is, the problems outlined around Tornado Cash are significantly lessened, in my opinion. The Tornado Cash thing is a reminder of how far we have to go.

All right, I feel like this has been incredibly productive. I think we're losing you again, Publius, but I think I got the message. This was very useful, and if you feel like what you surfaced is worth continuing discussion on a future call, feel free to raise it. Thanks everyone for coming; the recording should be up within a day or so. Thanks, everyone.