
Beanstalk Dev Call #7

Date
February 23, 2023
Timestamps
0:00 Intro • 1:08 Seaport Developments • 3:03 Using Seaport with Beanstalk Asset Types • 6:14 Motivations for Tractor • 14:02 Suitability of Seaport • 16:45 Standardizing Beanstalk Assets • 19:48 Plots as ERC-1155s • 36:51 Support for ERC-1155s Across DeFi • 44:27 Using Separate Contracts and Where to Keep State • 55:35 Deposits as ERC-1155s • 1:12:49 Changes to Indexing System • 1:22:14 Storing Metadata Off-chain • 1:40:09 Moving Forward
Type
Dev Call

Meeting Notes

Seaport Update

  • Last week, it was brought to the community's attention that Seaport recently introduced support for arbitrary orders through the use of Zones.
  • The whole purpose behind Tractor (which was loosely based on Seaport) was to be a fully arbitrary off-chain order book with on-chain settlement. The initial deployment of Seaport supported a subset of that functionality in that it allowed arbitrary token transfers between two parties, but it did not support fully arbitrary functionality, which would entail the ability to perform function calls to other contracts in the process of order fulfillment.
  • It seems that in Seaport 1.2 they introduced the concept of custom Zones, where you can specify external contracts to interact with. The current version is 1.4, so there has likely been additional iteration since.
  • If Seaport is sufficiently generalized to support the majority of markets that could be built around Beanstalk, then there's no need to create Tractor at all. There's no need to reinvent the wheel.
  • The limitation here is that Seaport only natively supports three types of assets: ERC-20, ERC-721, and ERC-1155. Given that Pods and Silo Deposits do not currently implement any of those three ERC standards, it leaves the community with a couple of options.
    • One option is to create a fork of Seaport and modularly add support for Silo deposits and Plots.
    • Another option is to somehow extend the interface for Deposits and Plots to adhere to ERC-1155 and therefore allow Seaport to natively support these assets.
  • The work required for both is probably similar in the sense that custom code will need to be written, audited, and reviewed extensively in both cases.
  • In the long run, it probably makes more sense to make Deposits and Plots adhere to the standards, so that custom functionality for these two asset classes will not be required going forward.
  • Standardizing Beanstalk's asset types will drastically reduce the amount of code that needs to be written to accommodate future technologies.
  • A Seaport order originally consisted of a list of transfers to and from the order creator and the order fulfiller. There was some sort of pricing function that defined the relationship between what was fulfilled and what was sent in a proportional way.
    • The first problem with that is there was no custom logic to price assets.
    • The second problem is there was no way to have custom pricing for different token IDs.
  • Additionally, there was no ability to perform additional required actions, such as withdrawing deposited assets to prepare the transaction.
  • There was also no way to add things like complex cancel conditions (e.g., allowing only two out of a set of orders to be fulfilled, or capping the total amount fulfilled across all of them).
  • Tractor was intended to allow the fulfiller's function call to include call data that would then be used to populate the farm function, which could perform arbitrary on-chain actions. It could also allow for pricing along a curve.
  • If it is possible to perform a call to Pipeline, it is probably possible to perform the majority of generalized on-chain functionality.
  • An example of something that Seaport would not have been able to facilitate is the exchange of Silo Deposits and Pods using a dynamic pricing function that values the Grown Stalk based on the Season of the Deposit, while also validating that the place in line of the Pods is within an acceptable range (see the sketch at the end of this section).
  • If Seaport is now able to support this type of order, that's great news. As soon as Deposits and Plots are standardized, Seaport could potentially be used.
  • Another question is whether Farm Balances could be used, but that could likely be achieved using Depot (e.g., transferring Farm Balance Beans to Pipeline and having Pipeline fulfill the order) if Seaport has the capability.
  • It seems like Seaport may be able to support all of this, and even if it can't, their team has been iterating quickly. With Seaport 1.4, they're already working toward a generalized logic structure for the order book.
  • Seaport already has great documentation, an SDK which can be leveraged, and a community of developers building in and around it.
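A minimal sketch of the kind of dynamic pricing logic described above, in Solidity. The linear valuation (a price per BDV plus a price per unit of Grown Stalk accrued) and the place-in-line check follow the example given on the call; the contract, function names, and units here are hypothetical, not part of Beanstalk or Seaport.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Hypothetical pricing contract for a Deposits-for-Pods order.
/// Values a Deposit linearly (y = m*x + b, where x is the Grown Stalk
/// accrued per BDV, b is the price per BDV, and m is the price per unit
/// of Grown Stalk), and rejects Plots beyond a maximum place in line.
contract LinearDepositPricer {
    uint256 public immutable bdvPrice;        // b: price paid per BDV
    uint256 public immutable grownStalkPrice; // m: price per unit of Grown Stalk per BDV
    uint256 public immutable maxPlaceInLine;  // e.g., 500 million: reject Pods beyond this

    constructor(uint256 _bdvPrice, uint256 _grownStalkPrice, uint256 _maxPlaceInLine) {
        bdvPrice = _bdvPrice;
        grownStalkPrice = _grownStalkPrice;
        maxPlaceInLine = _maxPlaceInLine;
    }

    /// Price a Deposit given its BDV and the Grown Stalk accrued on it per BDV.
    function priceDeposit(uint256 bdv, uint256 grownStalkPerBdv) external view returns (uint256) {
        // y = m*x + b, scaled by the amount of BDV being sold.
        return bdv * (grownStalkPrice * grownStalkPerBdv + bdvPrice);
    }

    /// Revert if the Plot being sold into the order is too far in line.
    function checkPlaceInLine(uint256 plotIndex, uint256 harvestableIndex) external view {
        require(plotIndex - harvestableIndex <= maxPlaceInLine, "Plot too far in line");
    }
}
```

In the flow described above, an order's pre-encoded Pipeline calls would invoke functions like these with the Plot supplied by the fulfiller, reverting on invalid fills and copying the returned price into the transfer call.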

Standardizing Beanstalk Native Assets

  • There are some downsides to extending the interface for Plots and/or Deposits to fit into 1155s that Seaport can trade.
  • The biggest downside would probably be gas considerations. There are many design decisions that would have gas efficiency implications.
  • Security is a concern any time the core Beanstalk protocol is upgraded. There is the potential for new security vulnerabilities to be introduced.
  • Since a refactor of the deposit system is underway as part of the change to variable Seeds per BDV, it would be best, if these assets are to be standardized, for that to occur at the same time, minimizing the number of times the Silo needs to be upgraded overall.
  • Aside from gas costs and security concerns, the user experience will remain largely the same. The main difference is that you will see your Deposits and Plots in your wallet as 1155 tokens, and you will be able to use them on OpenSea.
  • It is possible that they will have to be implemented as 1155s in such a way that OpenSea wouldn't natively support them.
  • There will probably need to be some custom implementation of transfer that includes the extra parameters currently included in the transferPlot function, namely the start and end range of the Pods being transferred. Fortunately, there is an extra data parameter in safeTransferFrom that can be used for this (see the sketch at the end of this section).
  • Decoding of the call data that specifies the relevant Pods will happen on-chain, as the blockchain needs to execute the transfer properly. Encoding will happen off-chain, and that's where different UIs will need to implement support for the custom functionality.
  • There would likely have to be more than one transfer event emitted in order to preserve the accurate Pod IDs within a Plot (e.g., burning the original Plot, minting the transferred range to the recipient, and minting the remainder back to the sender).
  • It doesn't seem that these two custom implementations of 1155 would break the standard, though.
  • OpenSea will treat it the same as a generic 1155, but there could be some default functionality implemented for when there is no call data specified. This would allow at least some limited functionality on OpenSea.
  • OpenSea's focus is on adhering to the standards to support generic 1155s. It might be possible to submit a pull request adding support, but in general the most important part is that Seaport will have the ability to support it; the emphasis is on the Pod Marketplace.
  • It's worth considering whether to create a new value standard intended for ordinal tokens like Plots with the goal of getting it adopted as an EIP or ERC for Ethereum. That would take a substantial amount of work and it's unclear what kind of traction it would get.
  • It is clear that there is a need for Pods and Deposits to conform to a value standard, and doing so is a step forward in the spirit of composability on-chain. Support across DeFi for Beanstalk's native value is becoming a limitation.
  • Most of the protocols implementing 1155s in DeFi make the assumption that all the tokens within a contract are similar in nature, which isn't much help when it comes to something like Plots or Deposits that require a unique dynamic pricing function. They mainly just flatten non-fungible tokens (1155s, 721s) into fungible ones, so they can effectively be treated as ERC-20s.
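A minimal sketch of what such a custom Plot 1155 transfer might look like, assuming OpenZeppelin's ERC1155 base contract (v4.x). The (start, end) range is decoded from the standard data parameter, a default front-of-Plot path handles generic UIs that pass no call data, and the burn-and-mint event pattern preserves Pod indices. All of this is illustrative, not Beanstalk's actual implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {ERC1155} from "@openzeppelin/contracts/token/ERC1155/ERC1155.sol";

/// Hypothetical Plot ERC-1155: each id is a Plot index (the place in line
/// of its first Pod) and the balance at that id is the number of Pods.
contract PlotToken1155 is ERC1155("") {
    function safeTransferFrom(
        address from,
        address to,
        uint256 id,
        uint256 amount,
        bytes memory data
    ) public override {
        require(from == msg.sender || isApprovedForAll(from, msg.sender), "not approved");
        uint256 plotSize = balanceOf(from, id);
        uint256 start;
        uint256 end;
        if (data.length == 0) {
            // Default path for generic UIs (e.g., OpenSea) that pass no
            // call data: transfer from the front of the Plot.
            (start, end) = (0, amount);
        } else {
            // Custom path: the caller encodes the exact Pod range to send.
            (start, end) = abi.decode(data, (uint256, uint256));
            require(end - start == amount, "range/amount mismatch");
        }
        require(start < end && end <= plotSize, "invalid range");

        // Burn the original Plot, then mint the transferred range to the
        // recipient and any remainder back to the sender, so every Plot id
        // still equals the index of its first Pod.
        _burn(from, id, plotSize);
        _mint(to, id + start, end - start, "");
        if (start > 0) _mint(from, id, start, "");
        if (end < plotSize) _mint(from, id + end, plotSize - end, "");
    }
}
```

Encoding the range off-chain would then just be abi.encode(start, end), which is the part a generic marketplace UI would not know to do.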

Silo Upgrade

  • Do both Deposits and Plots need to be implemented as ERC-1155s concurrently with the Seeds per BDV upgrade, or is that specific to Deposits?
    • Mainly Deposits, but it depends on how the 1155s are implemented.
  • There are four options for implementing the 1155s, determined by which contracts are responsible for being the 1155 contracts themselves (whether Beanstalk is the 1155 token or some new contract is deployed):
    • Both Pods and Deposits are combined into a single 1155 implementation under the Beanstalk contract. Probably the most complex and convoluted, but the cheapest from a gas perspective.
    • Using Beanstalk as the contract for one of Pods or Deposits and having the other use its own 1155 contract (two options, depending on which).
    • Both Plots and Deposits using their own 1155 contracts.
  • Fertilizer is an example of an 1155 with its own contract. Whenever you transfer Fertilizer or fetch your balance, you call the Fertilizer contract. When Fertilizer is minted or transferred, the Fertilizer contract emits the event.
  • Every contract is capable of implementing each ERC standard once, so Beanstalk as a contract could be upgraded to be an ERC-1155. For it to be extended to act as two 1155s, the transfer function could be implemented to check a flag in the first byte (or bit) of the call data to determine the appropriate 1155 (see the sketch at the end of this section).
  • The state of Deposits and Plots is currently stored within Beanstalk. If the decision is made to move Plots to a separate external contract, a decision also needs to be made as to whether all Plots should be stored in that contract going forward, or whether they should remain in Beanstalk's state with the 1155 designed as a wrapper.
  • Migrating existing Plots/Deposits could be optional, so you could have some that are 1155s and some that aren't.
  • If the state is kept in Beanstalk, there will be additional gas costs associated with transfer (a cold call to a separate contract costs 2,600 gas). If the state is migrated to a new contract, that additional cost can be avoided on transfer, but it will still have to be paid on mint.
  • For simplicity's sake, it makes sense to leave the state where it is and not migrate it to a new contract. The extra gas cost would probably be worth it.
  • The first question is whether there will be a separate 1155 contract or whether Beanstalk will become the 1155. If it is a separate contract, the next question is whether the state will be moved there or kept in Beanstalk.
  • Publius recommends leaving the state in Beanstalk, especially for Deposits given all of the complexities involved. Plots are much simpler, and it might make sense to move that state to a separate contract, but migrations are risky from a security perspective and require a lot of work.
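A minimal sketch of the single-contract option, where Beanstalk serves as one ERC-1155 for both asset types and dispatches on a flag in the first byte of the data parameter. This assumes the Plot and Deposit id spaces are constructed so they can never collide; the flag convention and function names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Hypothetical dispatch layer for Beanstalk acting as a single ERC-1155
/// serving both Plots and Deposits.
abstract contract DualAsset1155 {
    function safeTransferFrom(
        address from,
        address to,
        uint256 id,
        uint256 amount,
        bytes calldata data
    ) external {
        // The first byte of the call data flags the asset type; the rest is
        // asset-specific (e.g., a (start, end) range for Plots).
        require(data.length >= 1, "missing asset flag");
        uint8 assetType = uint8(data[0]);
        if (assetType == 0) {
            _transferPlot(from, to, id, amount, data[1:]);
        } else if (assetType == 1) {
            _transferDeposit(from, to, id, amount, data[1:]);
        } else {
            revert("unknown asset type");
        }
    }

    function _transferPlot(address from, address to, uint256 id, uint256 amount, bytes calldata data) internal virtual;
    function _transferDeposit(address from, address to, uint256 id, uint256 amount, bytes calldata data) internal virtual;
}
```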

Deposits

  • Beanstalk Deposits are currently indexed by token address and Season. With the migration to variable Seeds per BDV, the second index will change from a Season to cumulative accrued Grown Stalk per BDV.
  • The token address specifies what type of deposit it is, and the other index defines how much Grown Stalk has accrued on top of the deposit.
  • Any implementation that Deposits use should be future-proof, as it is an inevitability that Beanstalk will need to support 1155s and 721s as whitelisted tokens in the Silo.
  • The ID space of the 1155 standard is 32 bytes. A Deposit of an ERC-20 can fit within it (a 20-byte token address plus a Grown Stalk index compressed to 12 bytes), but accommodating Deposits of 1155s and 721s, whose token IDs are themselves 32 bytes, would require 64 bytes.
  • The idea of creating a value standard with a variable or infinite length ID space is interesting, but the next best solution would be to concatenate the token address + Grown Stalk ID + token ID and hash it to fit in the 32-byte ID space (see the sketch at the end of this section).
  • The problem with hashing is that it is a one-way operation and there's no way to determine the inputs with just the hash. So the problem becomes how to maintain the relation between the 1155 ID and its metadata.
  • The simplest way would be to store it as a mapping on-chain. The problem with that is that the cost of on-chain storage is quite high (roughly 20k gas per 32-byte slot); it would increase the gas cost of every Deposit.
  • The alternative would be to perform off-chain linking of token ID to metadata by parsing events, since the metadata is already emitted through the AddDeposit event.
  • The safeTransferFrom function can require that the metadata be encoded in the call data and verify that it is valid by decoding and then hashing it.
  • This highly customized implementation would break native compatibility with other UIs that don't support custom 1155s.
  • There are three questions: (1) whether the idea of hashing the metadata and using the hash as the token ID is sound; (2) whether it should be a new 1155 contract or whether Beanstalk should be the contract; and (3) whether the metadata should be stored on-chain, increasing the gas cost for all depositors, or assumed to be computed off-chain from events and input into the safeTransferFrom function as generic call data.
  • The hash could be implemented later, once 1155 and 721 tokens are actually added, but then there would be two different token ID systems, which would be confusing.
  • For Deposits, token address, Grown Stalk index, and token ID should probably be attributes of the metadata accessed by the URI function. There is a tradeoff of some centralization, but from a censorship resistance perspective the metadata can always be calculated from the on-chain events.
  • It might be worth the tradeoff to accept some centralization in exchange for avoiding the increased gas costs for all users that would come with storing the metadata on-chain.
  • If it is an important first principle to have the metadata stored on-chain, it might make sense to start with only ERC-20 support in the hopes that by the time 1155 and 721 support is needed there might be mitigating factors that make the gas costs less significant (some sort of network upgrade or migration of Beanstalk to a ZK-rollup, for example).
  • The gas costs to store the metadata on-chain could be somewhere in the $0.50 to $2.00 range.
  • The lack of an on-chain function that returns the metadata might be an obstacle for other protocols that build on top of Beanstalk.
  • There is a way to use The Graph to make a simple API endpoint that would take an ID and return the inputs.
  • The most urgent decision is whether to change the indexing method to be a hash.
  • If concatenation is used for ERC-20 tokens, the switch to the hash structure can be done later.
  • Another open question is whether there should be some kind of batch migration at the time of deployment, or if people should individually migrate their Deposits.
  • Since the 1155 implementation would be required to build a Deposit marketplace with Seaport, it might make sense to move forward with the full implementation as part of the Silo upgrade that pizzaman is working on. At a minimum, changing the indexing system would be beneficial, though it would be less elegant to do it separately.
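A minimal sketch of the hashed-ID scheme discussed above, plus the concatenation fallback for the ERC-20-only case: the metadata is hashed into the 32-byte 1155 id, balances are indexed by that hash, deposits emit the metadata for off-chain indexers, and transfers carry the preimage in the data parameter and verify it by re-hashing. Storage layout, event shapes, and names are illustrative, not Beanstalk's.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Hypothetical hash-indexed Deposit ERC-1155 (approvals, Stalk/Seed
/// accounting, and 1155 receiver checks are omitted for brevity).
contract DepositId1155 {
    struct Balance { uint256 amount; uint256 bdv; }
    // depositor => hashed deposit id => balance
    mapping(address => mapping(uint256 => Balance)) internal deposits;

    event AddDeposit(address indexed account, address token, int256 grownStalkIndex, uint256 tokenId, uint256 amount, uint256 bdv);
    event TransferSingle(address indexed operator, address indexed from, address indexed to, uint256 id, uint256 value);

    /// Compress (token, Grown Stalk index, token id) into the 32-byte id space.
    function depositId(address token, int256 grownStalkIndex, uint256 tokenId) public pure returns (uint256) {
        return uint256(keccak256(abi.encode(token, grownStalkIndex, tokenId)));
    }

    /// Alternative for the ERC-20-only case: pack the 20-byte address and a
    /// 12-byte index directly, keeping the id decodable without events.
    function depositIdPacked(address token, uint96 grownStalkIndex) public pure returns (uint256) {
        return (uint256(uint160(token)) << 96) | uint256(grownStalkIndex);
    }

    function balanceOf(address account, uint256 id) public view returns (uint256) {
        return deposits[account][id].amount;
    }

    /// Record a new Deposit under its hashed id, emitting the metadata so
    /// off-chain indexers can recover the preimage (nothing extra stored on-chain).
    function _addDeposit(address account, address token, int256 grownStalkIndex, uint256 tokenId, uint256 amount, uint256 bdv) internal {
        uint256 id = depositId(token, grownStalkIndex, tokenId);
        deposits[account][id].amount += amount;
        deposits[account][id].bdv += bdv;
        emit AddDeposit(account, token, grownStalkIndex, tokenId, amount, bdv);
        emit TransferSingle(msg.sender, address(0), account, id, amount);
    }

    /// Transfers require the preimage metadata in `data`; it is verified by
    /// re-hashing and comparing against `id`.
    function safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes calldata data) external {
        require(from == msg.sender, "not approved");
        (address token, int256 grownStalkIndex, uint256 tokenId) =
            abi.decode(data, (address, int256, uint256));
        require(depositId(token, grownStalkIndex, tokenId) == id, "metadata mismatch");

        Balance storage b = deposits[from][id];
        require(b.amount >= amount, "insufficient balance");
        uint256 bdvRemoved = (b.bdv * amount) / b.amount; // move BDV proportionally
        b.amount -= amount;
        b.bdv -= bdvRemoved;
        deposits[to][id].amount += amount;
        deposits[to][id].bdv += bdvRemoved;
        emit TransferSingle(msg.sender, from, to, id, amount);
    }
}
```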

Transcript

Hey, everyone. So I think we're sort of figuring out what we're discussing today on the fly. But originally, about a week ago, we wanted to spend some time discussing all the markets that we should be trying to build on top of the primitives that are Wells and Tractor; markets for, say, yield rate swaps, Fertilizer, etc. And I think that through those discussions we sort of realized that it's probably necessary to talk about how Beanstalk native assets, like Pods, for example, should be traded, i.e., should they conform to existing ERC standards, should we propose our own, etc. So I feel like you'd probably be a good candidate to talk about what the problem is and what sort of trade-offs we should be taking into consideration. Yeah. So, you know, first off, on the dev call last Thursday, it was brought to the community's attention that Seaport recently introduced some sort of support for arbitrary orders through the use of Zones. Now, the whole purpose behind Tractor (and Tractor was loosely based on Seaport in terms of how orders are created and generally how they're fulfilled), the idea for Tractor, was to be a fully arbitrary off-chain order book with on-chain settlement. Now, the initial deployment of Seaport supported a subset of that functionality in the sense that it allowed arbitrary transfers of tokens between two parties, but it didn't necessarily support arbitrary functionality, where arbitrary functionality would entail the ability to perform some sort of Pipeline function call, or a function call to any other contract, in the process of order fulfillment, and target the output of that action to either the fulfiller or the offer creator. Now, it seems like in Seaport 1.2 they introduced this concept of custom Zones, a Zone being an external contract that you specify and make some function call to. I personally have not had time to really dig into the weeds of what's possible with Seaport's Zone system, but now there is version 1.4, so I'm sure there have been several iterations along the way. And how this all relates to what's going on is: if Seaport is sufficiently generalized to support the majority of markets that it would be great to have built in and around Beanstalk, then there's no need to create Tractor at all. If there's already a protocol that can do the functionality that is desired, there's no need to reinvent the wheel. The limitation here is that Seaport only natively supports three different types of assets: ERC-20, ERC-721, and ERC-1155. Given that Pods and Deposits do not currently implement any of those three ERC standards, it leaves the community with a couple of options here. Option one is to create some Seaport fork and modularly add support for Deposits and Plots. Option two is to somehow extend the interface for Deposits and Plots to adhere to ERC-1155 and therefore allow Seaport to natively support these assets.
The work required for both is probably similar in the sense that custom code will need to be written in both cases, an audit will need to be performed in both cases, and code review will need to be performed extensively in both cases. And when thinking two to three years down the road, it probably makes more sense to make Deposits and Plots adhere to the standards, so that custom functionality for these two asset classes is never going to be required going forward. For instance, Depot has custom permit and transfer functions for Plots and Deposits. If Pods and Deposits adhered to the 1155 standard, then there would be no need to add custom support for these. So in the long run, by standardizing Beanstalk's asset types, it will drastically reduce the amount of code that will need to be written any time some sort of technology is built with the intention of having it support transferring or use of arbitrary on-chain value, specifically as it relates to Beanstalk. I'll pause here for any kind of questions and comments. Can you give an example use case of what Tractor was intended to support that your original understanding of Seaport would not be able to? Yes. So, from my understanding, a Seaport order originally consisted of multiple transfers. It was a list of transfers from the order creator to the order fulfiller, and a list of transfers from the order fulfiller to the order creator. There was some kind of predefined pricing function that was used to define the relationship between the amount that was fulfilled and the amount that was sent from the order. So it used some sort of proportional system where if you fulfilled 10% of the order, then 10% of the items got sent to you. The first problem is there's no custom logic to price the two assets. The second problem is there seemed to be no way to have custom pricing for different order IDs or for different token IDs. For something like Plots, where you can imagine the index of the Plot is the token ID, there's no way to create some pricing function that's a function of that ID. Instead, when creating an order, you would just specify all of the acceptable IDs, store them, and put the Merkle root of that on chain, which means there was no way to have the order priced on what the actual ID was. In addition, if there were any kind of specific actions that were required in the order (perhaps the person creating the order has deposited Beans and the order is denominated in Beans, and therefore some sort of withdrawal is required to precede the transfer of Beans; now, I guess using Pipeline, it could always happen after the order), there is no ability to perform some kind of arbitrary on-chain action to facilitate the order in the middle of the order.
Things like complex cancel conditions, where you have, say, 10 to 20 orders and you want only two of them to be able to be fulfilled, or the sum of the amount fulfilled across all of them to be less than or equal to 1,000, say: these complex, arbitrary conditions did not seem possible. Tractor was going to be implemented basically as a farm function, either as a facet in Beanstalk or potentially as a standalone contract, where it just performs a list of on-chain function calls, with the ability to copy call data from the fulfiller's function call into the farm function call. So this farm function call becomes some sort of composition of the data that was originally created in the order and the data specified by the fulfiller. By doing that, the basic goal is to allow something like a Pipeline call to occur. If it's possible to perform some sort of call to Pipeline, then it's probably possible to perform the majority of generalized on-chain functionality. I think a really good example of what the original Seaport would not allow is the case where there's some sort of Plots-for-Deposits market. People want to trade Plots for Deposits, and someone wants to create some order where any of their Deposits is priced with some sort of y = mx + b formula, where x is the amount of seasons or Grown Stalk that has accrued on top of the Deposit, b is the price that they're using to price the BDV, and m is the price or value of Grown Stalk. And on the other side, they're trying to buy a Plot, and they want to use some complex pricing function, maybe the polynomial pricing function that was implemented for Pod Market V2; they want to use that in their order such that only a Plot that is accepted by the pricing function can be sold into the order, and the price at which it sells into the order is determined based on the return value of the pricing function. Now, in Pipeline, how this would occur is: the fulfiller would specify the Plot that they're trying to sell into the order when they go to fulfill the transaction. In the order, there would be pre-encoded data specifying to copy the Plot that was input by the fulfiller as call data into the pricing function and into the transfer function that is encoded in the list of pumpkin calls, or Pipeline calls (I think I said pumpkin; I don't know where that came from), originally specified by the order creator. So the pricing function would be called with the Plot input by the fulfiller. The pricing function would either revert, if the Plot was invalid or not in the specified range (maybe the order creator only wants 500 million or less in line), or it would return the price, which Pipeline, using the advanced call data functionality where it can copy return values, would copy, multiplied by some amount, into the transfer Deposit function call.
It would probably have to do a little bit of math to price in the Grown Stalk, etc., if desired; but maybe the Grown Stalk's not priced. The point is that the conditions which make this order valid are sufficiently arbitrary. Now, this is where the open question is: could Seaport support an order of this type? And ultimately the goal is to first determine whether that's the case. If it is the case, then that's great news, and then, upon standardizing Deposits and Plots, they immediately have support for Seaport. Now, one more thing to note: the question of whether Farm Balances can be used is also an open question, but I would imagine that through some sort of Depot call, where in Depot you transfer your Farm Balance Beans to Pipeline and then Pipeline fulfills the order through Seaport, it would still be possible to use Farm Balances in Seaport natively. So from that perspective, assuming Seaport is capable of doing that, which it seems like might be the case (and if it's not the case, it seems like their team has been iterating pretty quickly; they're already, in Seaport 1.4, trying to get to some generalized logic structure for the Seaport order book), the benefits of using Seaport are: there already is great documentation on it, they already have an SDK which could be leveraged, and there's already a community of developers building in and around Seaport. Whereas for Beanstalk to start from scratch in creating its own on-chain order fulfillment engine, even if it is substantially more simple in the number of parameters: Seaport seems to have a lot of fluff in terms of optional parameters that sometimes get used, but it also has a lot of different optimized, gas-efficient options for people only using a limited subset of the functionality. So I guess, does that answer the question in terms of what is the functionality that Tractor was intended to provide that might be covered by Seaport? It does. So what would be lost, or what would be the consequences or downsides, of extending the interface for Pods and Deposits to fit into 1155s so that Seaport can trade them? Yeah. The biggest downside would probably be gas considerations. When getting into the discussion of how to actually implement them, there are several design decisions to be made which all impact the gas cost of sowing, transferring, depositing, when the 1155 gets minted, etc.; we'll get into that. Aside from gas costs, there are security concerns any time the core Beanstalk protocol is upgraded: there are potentially new security vulnerabilities, as there's new code being deployed on chain. Why this topic is relatively urgent to discuss is that currently there's a refactor of the deposit system going on in Beanstalk, which is a part of the whole variable Seeds per BDV change. And if the decision is made to move Deposits over to an ERC-1155 structure, it would be good to do that
at the same time that the Seeds per BDV change is done, to minimize the number of times the Silo needs to be upgraded overall. And just structurally, it would probably be a pretty significant change, so grouping two significant changes into one would probably be ideal to minimize risk. But other than potential security implications and increased gas costs, the user experience will probably not change, except for the fact that now, when you go to Etherscan, you'll see your Deposits and your Plots as 1155 tokens in your wallet, and you'll be able to use them on OpenSea, presumably. However, one thing we'll get into is that the conditions under which Deposits and Plots will need to be implemented to support the 1155 standard might make it so that OpenSea won't natively work with Deposits. But that's something we'll get into later in the discussion. Did you mean Seaport or OpenSea at the very end there? Apologies, it would be OpenSea. Yes, OpenSea wouldn't support them. Maybe we can get into why a little bit. Let's take Plots as the example here, although for Deposits, again, there are several design considerations that can be made down the road. On the Plot side, Plots are this ordinally structured asset where there are multiple Pods in a Plot, but those Pods are numbered 1000, 1001, 1002, etc. If you have a thousand Pods in a Plot with an index of 1000, it's not 1000 of ID 1000; there's one Pod at each number between 1000 and 2000. So when it comes to transfer, say you're transferring some of that Plot, we'll say 500: in the normal 1155 standard framework, one would expect the result of this transfer to be that I have 500 Pods at index 1000 and you now have 500 Pods at index 1000, because I transferred you 500 of my 1000 at ID 1000. That's the standard implementation of transferring 1155 tokens. Now, Plots, given their ordinality, would probably need some custom implementation of transferFrom that includes the extra parameters that are currently included in the transferPlot function, namely the start and end range of the Pods that are to be transferred. Fortunately, the transferFrom function in ERC-1155 includes an extra parameter that can be used to input arbitrary call data into the transferFrom function. I'm quickly going to paste the function signature in Barnyard Chat; give me a quick second, just so everybody can look at this, and it might be helpful for people to actually open the standard up as we're talking through this. So the safeTransferFrom function, as pasted here and as specified, has this extra data parameter. And what that means is that if Plots implemented the 1155 standard, the start and end parameters could be encoded and passed into the safeTransferFrom function in this data parameter, and the implementation of the safeTransferFrom function, knowing this is the Plot 1155 standard contract, would know to decode that call data into the start and end parameters and then transfer the correct amounts.
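(The signature pasted into chat, as defined by the ERC-1155 specification; the trailing _data parameter is the one discussed here:)

```solidity
function safeTransferFrom(address _from, address _to, uint256 _id, uint256 _value, bytes calldata _data) external;
```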
Now, I guess the second caveat worth mentioning here is on the event side. Going back to the example of how it works normally: I have a thousand of ID 1000, I transfer you 500; the event would be: emit a transfer from me to you with this ID, amount 500. In the case of a Plot, not only do you have to use this custom call data in the transferFrom function call, but the corresponding events that are going to be emitted are likely, instead of this single transfer event, going to be three events: probably first burning the Plot I have (burning 1000 at 1000), then minting whatever range I sent to you, and then minting the rest of the Plot to me. So say I sent you the first 500. It would either be a transfer of 500 at 1000 from me to you and a burn of the other 500 (and I guess this is another topic for discussion), but probably what makes the most sense is to just have a burn, i.e., a transfer to the zero address of 1000 at 1000, a mint to you of 500 at 1000, and a mint to me of 500 at 1500, just so it's clear that I now have 500 at 1500 and you now have 500 at 1000. But the point is that the standard of one transfer event per transfer would be broken here. Now, upon reading through the 1155 standard, from my perspective it doesn't seem like these two custom implementations of 1155 break the standard. But if anyone sees verbiage in there that would specify otherwise, please feel free to bring it up and share, as there are quite a lot of conditions and rules in this standard. But I guess, getting back to the point, from OpenSea's perspective: it is my assumption that OpenSea is not going to know that it is required to enter this custom call data when transferring a Plot NFT. So there could be some unique, specific logic in the case where no call data is specified, and maybe that's just to transfer Pods from either the beginning or the end of the Plot. Because say you list some Pods on OpenSea, 1000 at index 1000: OpenSea is not going to know how to set that call data to specify the start and end range, as OpenSea just assumes this is some generic 1155 standard contract. But to the point, there could be some default functionality implemented if no call data is specified, such that there will still be some ability to support Plots on OpenSea. So from that perspective, OpenSea would support limited functionality, in the sense that it would support transferring from the start or the end of the Plot. But with Plots, we already have the ability to do very simple orders through the Pod Marketplace, so I don't think that's that big of a deal. On the Deposit side, there is definitely a bigger open question, and that goes back to the question of how a Deposit is ultimately stored on chain.
And perhaps before we get into Deposits: any questions, comments, or thoughts on Plots? Confusion around this implementation, thoughts on whether it fits into the specification, thoughts on how OpenSea is going to behave with this kind of implementation? I guess I'm a little bit confused on the, you mentioned encoding the start and end of the Plot in the call data. I'm not totally sure how to ask the question, but I guess my question is: who or what is responsible for decoding that? And if you're using an interface like OpenSea, how would you, as the Beanstalk DAO, get OpenSea to interpret that data properly? Yeah. So the decoding happens on chain. The implementation of the safeTransferFrom function will know to assume certain things about the call data: for instance, that it consists of two integers, adjacent and encoded next to each other, the first one specifying start and the second one specifying end. So what the implementation will look like is, as stated, it might make sense to add some default functionality for the case where no call data is entered or encoded into the function call; but in the case where call data is, it will first try to decode that call data into two integers, the start and end parameters (or the start and length parameters, it doesn't matter, they do the same thing), and then it will basically perform the same logic as the current transferPlot function, using the ID, the amount, and then the start and end parameters. If the decode fails, meaning it's not the correct length or it's not the correct format, then the function call fails, which is the expected behavior, as the entered information is false. So from the decoding perspective, that happens on chain, as the blockchain needs to know which part of the Plot to send you. The encoding happens off chain, and that's where things get a little more complicated, in the sense that OpenSea, to be usable, will need to know how to encode the data to call the transferFrom function with. And it probably won't be possible to get OpenSea to support transferring parts of Plots; it would require some sort of custom logic specifically for the Plot 1155 contract. Now, my guess would be that OpenSea probably doesn't support anything like this, as they're focused on supporting the general standards, probably not on specific, unique implementations. To my knowledge, OpenSea's UI is not open source (I could be wrong there; if it is open source, then potentially it might be possible to submit some sort of pull request). But in general, to me at least, the important part is that Seaport supports the implementation, and not necessarily OpenSea. OpenSea is some short-term benefit, in the sense that we've seen some volume for Fertilizer on OpenSea, but there already is a Pod Marketplace supported within Beanstalk, and ultimately the intention will be to create a new Pod Marketplace more complicated than anything OpenSea could support natively. And at that point, using OpenSea becomes kind of a functionality reduction.
This might just be on my end, but it sort of feels like we're using Pods and Plots somewhat interchangeably. And I remember, whenever you would answer questions in the earlier days of Beanstalk about why Pods or Plots weren't implemented as either a 721 or an 1155, if I remember correctly, it had something to do with: if Plots were implemented that way, liquidity would be reduced, given that they couldn't be split up. Maybe you already covered this to some extent, but any color there would be helpful. Yeah. I mean, quite frankly, when Beanstalk was being developed, there wasn't too much information or thought around what standard interfaces exist and what the specifications are to leverage and use the different standards. When Beanstalk was being developed, DeFi as a whole was still relatively in its infantile stage, and the importance of ERC standards in composability wasn't quite apparent to us at the time. And it's taken a fair amount of research and reasoning to get to a place where it seems like it might be possible to implement Plots and Deposits this way. So I guess the point is: initially, when Beanstalk was first being developed, it was, first off, not clear what the advantages of using the standards are, and secondly, it wasn't clear how to implement them as standards, given the nature of Plots and Deposits. And one thing that's worth mentioning is that there is the alternative solution, and this is ultimately something that the community will need to come together to make a decision on: trying to create new custom value standards that become EIPs or ERCs at the Ethereum level, which are intended to be used for something like an ordinal token, like Plots. However, there's this kind of eternal question of when it makes sense to do it yourself. I think the Beanstalk community has seen firsthand how hard it is to build, create, and iterate on new things that are built from scratch, and to create some new value standard specifically intended to be used for ordinal tokens like Plots, getting adoption of the EIP would be quite a significant task with a lot of overhead. Let's assume that two to three weeks are spent drafting some sort of EIP for this: how long will it take to get support for this new value standard EIP? How will a community form around it? How will attention be given to it? So, it's only because it's clear now that there's a need to have Deposits and Pods conform to a value standard, and that doing so is a step forward in the spirit of composability on chain, that it really seems like we're at a crossroads here, where support across DeFi for Beanstalk's native value is becoming a limitation in use cases across DeFi. Hopefully that answers your question. That's very helpful, thanks.
Can you talk about the landscape of adoption of the 1155 standard across DeFi at the moment, particularly on the EVM? Yeah. Not entirely sure what you mean by landscape, but going by DEXes and lending protocols: so OpenSea is an off-chain order book, and they support the 1155 standard. But what about on-chain DEXes, and what about on-chain oracles, and what about on-chain borrowing and lending protocols? Yeah. Personally, I haven't had too much time to look deeply into the NFT ecosystem in general, but from what I've seen, general lending and DEX support might be lagging a little bit behind. However, the power of Seaport support alone is enough to perform arbitrary trades and on-chain actions, and from my perspective, leveraging Seaport is definitely the biggest value out of adhering to such a standard. From an on-chain DEX perspective, I believe that Uniswap might have some sort of ERC-1155 pool structure, but the current problem with most implementations of lending and borrowing protocols around these standards is that most of them allow for fungible, composable lending, borrowing, or pooling of these tokens by making the assumption that all the tokens within the contract are similar in nature. For instance, I believe the sudoswap pool assumes that all NFTs are basically the same and supports the ability to sell or buy any NFT in the collection, so it really just becomes a means of trading the floor price, which is incredibly unfair. It doesn't help us at all when it comes to something like Plots or Deposits, where now we're looking at tokens that actually have some independent variable along which they are non-fungible. Take any Deposit: if you create some Deposit mechanism for a sudoswap AMM, now you're grouping Unripe BEAN:3CRV Deposits with Bean Deposits with BEAN:3CRV Deposits. And (please, if something like this exists, feel free to shout it out or share it with the community here) it doesn't seem like there are any on-chain pricing mechanisms that allow for some sort of AMM to be deployed that takes the ID within the collection as an input into some sort of pricing function. And from the lending and borrowing perspective, I know there are a few out there; I don't think any of them have gained too much traction. But from my understanding, they behave similarly, and it's the same on the oracle side, too, where it really just allows you to borrow and lend at the floor price of the entire collection, and the oracle is just providing the floor price.
And when getting into these financial NFTs, where the independent variable is actually something substantive, instead of just some variation of the same JPEG, these existing platforms don't seem to be able to be leveraged. I guess a note around the different standards going forward is that at some point it's going to make sense to start building composable technology that potentially can support any of these value standards, and potentially any new ones. And ultimately it will be the goal for Wells, and any lending protocol that gets built, to be able to support all three types of value standards, having some sort of defined bounds of the storage space and definition space within which all different types of value can be encompassed, and then composing those into a single option where, within a Well, you specify the token address and the type of value standard that token address is. So you could have an ERC-20 against an 1155 token, and there are some kind of abstract, arbitrary conditions by which ERC-20 tokens and 1155 tokens get exchanged. But that becomes really quite difficult, as, especially with the 1155 standard, there are an arbitrary number of axes, and implications of those axes, that need to be kept in mind when developing some sort of pricing function on top of them. So doing anything beyond turning the non-fungible into the fungible, which is what existing NFT borrowing and lending platforms and AMMs essentially do (reducing a 721 to an ERC-20, or an 1155 to an ERC-20, and just having some sort of interface that allows that to happen), composing beyond that, while still being able to use the fact that the orders are stored on chain as an oracle, as a tool to prove that you have an open order, or as a tool to liquidate into, becomes kind of difficult. Thank you. But to the extent that any DEXes or borrow-lend protocols are going to support 1155s in the near future, having Pods and Deposits support 1155 would maybe make it so that they can be used on those DEXes, and if not on those DEX UIs, at least at the contract level. Is that right? Certainly. All right. So do you want to talk about Deposits, or do people have more questions about Pods? When you talk about the urgency with regard to the variable Seeds per BDV change, is that entirely around design changes to how Deposits are structured? Or, sorry, my question is: is that only involved with coming to a decision around how Deposits conform to 1155s, or are Plots a totally separate thing? So in this case it would specifically be for Deposits. However, taking Plots into the context of Deposits is important from the perspective of which contracts are responsible for being the 1155 contracts themselves. There are three, or I guess four, different options here, where both Pods and Deposits can each be implemented in a way such that Beanstalk is the 1155 token or some new contract is deployed as the 1155 token.
The four options being: both Pods and Deposits are combined into a single 1155 implementation under the Beanstalk contract (this one is probably going to be the most complex and convoluted, but it's going to be the cheapest from a gas perspective); then there's the option of having Beanstalk be the 1155 for either Pods or Deposits and having the other one be its own contract, so those are options two and three; and then there's option four, in which both Plots and Deposits are their own 1155 contracts. And maybe to help clarify what the difference is: right now, Fertilizer is its own 1155 contract. To transfer Fertilizer or fetch your balance, you call the Fertilizer contract; when Fertilizer is minted or transferred, the Fertilizer contract emits the event. Every contract is capable of implementing each ERC standard once, so Beanstalk as a contract could be upgraded to be an ERC-1155 token, it could be upgraded to be an ERC-20 token, it could be upgraded to be an ERC-721 token, and it could be upgraded to be an instance of all three. Now, what it would look like for Beanstalk to be two different ERC-1155 tokens is kind of the same as if it were to be one: it has a domain space of 2^256 potential IDs that can be used, and so long as Pods and Deposits are implemented in a way such that there can be assumed to be no collisions, then there is some graceful way that the Beanstalk contract can be built. What this would look like is: in the transfer function, there will probably need to be some Boolean flag that specifies whether this is a Plot or a Deposit transfer, and that flag gets encoded into the arbitrary data parameter. You call the transferFrom function, and the first thing it does is decode the call data to determine if it's a Plot or a Deposit transfer, by just decoding the first byte (or the first bit), and then it routes to either the transferPlot function or the transferDeposit function based on that byte. From a gas cost perspective, there are several implications, or design decisions that need to be made, in both cases. If they were to be moved into a separate contract, the biggest is just that the state of Deposits and Plots is currently stored within Beanstalk. So to move Plots to some external, separate contract (say that decision is made to have a separate contract be the Plot 1155 contract), then the decision comes: should all Plots going forward be stored in the state of that contract, or should they still be stored in the state of Beanstalk, where, when you call the balanceOf function on the Plot contract, it just calls the getPlot function on Beanstalk? In that case, it just becomes a wrapper contract, in the sense that it emits the 1155 transfer and mint and burn events, but the state is stored in Beanstalk. The alternative being that the state is actually stored in the Plot contract for all new Plots minted going forward, with some option to migrate existing Plots into the 1155 structure, and potentially some optionality overall.
So you could have some option to have Plots that are not 1155s and Plots that are. The exact same case is true for Deposits: all Deposits are currently stored within Beanstalk, so if the decision to make Deposits a separate 1155 contract is made, then the consideration of whether to store that state in Beanstalk or in that 1155 contract going forward will need to be made. And definitely, in thinking about the gas considerations of the different options here: the gas cost of calling a separate contract for the first time is 2,600 gas. So in moving Deposits and Plots to a separate contract as an 1155 token, one could expect the gas cost to rise by at least 2,600, probably with an extra thousand gas of general overhead. Now, if the state is kept in Beanstalk, when you call the transferFrom function, the transferFrom function is going to have to call Beanstalk, so there's also going to be an increased gas cost of 2,600 on transfer if state is kept within Beanstalk. If the state is migrated to the separate Plots contract, meaning a separate Plots contract is used for the 1155 and the state is moved there, then on transfer the Plot contract doesn't need to call Beanstalk at all, so it can have a cheap transfer; but on mint it'll still need that cross-contract call. Probably, for simplicity's sake, it makes sense to leave all state where it is and not move the state; the extra gas cost, if the decision is made to use separate contracts, is probably worth it, given that a call to Beanstalk will be necessary in the case of a transfer. But I'm just trying first to help define the decision space of what the possible options are when discussing the details of potential implementations here. Got it. So you feel like these decisions should be made at the same time, basically? They kind of need to be, because, let's say a decision is made; namely, the one that seems to be more urgent is on the Deposit side, as it would be great to group those changes with the changes that pizzaman is working on. So the first question is: is a new contract the 1155, or is Beanstalk becoming the 1155? The second question is: if it's a separate contract, is the state going to be moved to the 1155 or kept in Beanstalk? Again, on this end, it probably makes sense to leave the state in Beanstalk, just for complexity's sake; there's a lot that's intertwined, specifically around Deposits, in Beanstalk. On the Plot side, the optionality, or the question, is probably a little bit more open, as the Plot state is much, much simpler, in the sense that there's never been a migration and Plots are not wrapped up in the Stalk and Seed stuff (when you transfer a Deposit, you're also transferring Stalk and Seeds). There, it might make sense to move state to a separate contract.
But in general, any sort of migration: migrations are probably the riskiest from a security perspective, and they require the most attention to detail and testing, as migrating every single piece of state is quite risky. Yeah, that answers my question. So maybe, to Publius's earlier point, we can move on to Deposits, unless anyone has any other questions. Going to go ahead and move forward, but feel free to just cut me off if there are any more issues to address on that topic. So with Deposits, the first thing that's important to do is define the domain space of all of the unique identifiers for any type of Deposit. Currently, within Beanstalk, Deposits are indexed based on two different variables, the first being the token address and the second being the Season. In this migration to the variable Seeds per BDV change, the index will change from a Season to some cumulative accrued Grown Stalk per BDV index. But the general idea is that there is some token address which specifies what type of Deposit it is, and then there's some index which defines how much Grown Stalk has accrued on top of the Deposit. Now, one thing that's important to note is that any implementation Deposits use should be future-proofed. It's kind of an inevitability that Beanstalk will at some point need to support 1155s and 721s in the Silo as whitelisted tokens. Let's take the case of Uniswap V3 being whitelisted into the Silo, or some Wells V2 that has some sort of ranged liquidity, maybe one-sided on-chain orders, maybe stop losses, whatever it may be. In the case where there exists an AMM where there is more than one type of position, which is every AMM but Uniswap V2 and its associated forks and versions of the constant function AMMs: if there's more than one way a position can be represented, then the ERC-20 token immediately becomes an insufficient value standard to represent the liquidity position. This is ultimately what led Uniswap V3 to mint ranged liquidity positions as NFTs, and Trader Joe V2 to implement them as 1155s. Now, with this extension to 1155s, the domain space of Deposits grows quite substantially from being just a token address, which is 20 bytes, plus some index, which currently is 16 bytes but could be compressed if necessary. And why that's relevant is that it could be compressed to 12, and 12 plus 20 is 32, and the domain space of an 1155 ID is 32 bytes. So if the domain space were limited to that of ERC-20 tokens, we could just compress that and make the token ID the address concatenated with the Grown Stalk index. However, given that the goal should be to future-proof things for instances where 1155s and NFTs are included: as discussed, the ID space of an 1155 or an NFT is itself 32 bytes. So even if the Grown Stalk index is compressed from 16 bytes to 12, we're still dealing with a minimum domain space of 64 bytes, as we have the token address plus the Grown Stalk index plus the token ID.
So now we have a domain space of 64 bytes for the unique identifier of a deposit, and the goal is to squeeze that into 32 bytes, which is the limit of the 1155 standard's ID. This is where the discussion of creating some value standard with a variable-length, or even infinite-length, domain space becomes particularly interesting, because there is this recurring requirement to compress the domain space. Fortunately, cryptography gives us a very easy way to compress a domain space: hashing. Standard hash algorithms (in the case of the EVM, generally keccak256) can compress arbitrary-length data into a fixed length. Given that there's no way to compress the domain space natively into 32 bytes, the next solution is to hash it. So say we take the token address, concatenated with the Grown Stalk index, concatenated with the token ID, and hash that, resulting in a domain space of 32 bytes. Great, fantastic: now we're in a domain space of 32 bytes, the tolerable range for an 1155.
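As a minimal sketch of that derivation, with illustrative names and assuming the post-upgrade Grown Stalk index fits in an int96 (this is not Beanstalk's actual code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Hypothetical deposit ID derivation via hashing, as discussed above.
library DepositIdSketch {
    /// ERC-20 deposits: the preimage is (token, grownStalkIndex), 32 bytes.
    function erc20Id(address token, int96 grownStalkIndex) internal pure returns (uint256) {
        return uint256(keccak256(abi.encodePacked(token, grownStalkIndex)));
    }

    /// Future ERC-721/1155 deposits: the underlying tokenId joins the
    /// preimage (64 bytes total); keccak256 still compresses it to 32 bytes.
    function nftId(address token, int96 grownStalkIndex, uint256 tokenId) internal pure returns (uint256) {
        return uint256(keccak256(abi.encodePacked(token, grownStalkIndex, tokenId)));
    }
}
```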
Now, the difficulty with hashing is that hashing is a one-way operation: when you have a hash, there's no way to determine what the inputs to the hash were. You hash the Bean address concatenated with some index, say a million, and you're given back what looks like a random string of digits. The beauty of hashing is the assumption of collision resistance, which is what allows us to store a larger set of data as a smaller set of data. But the problem becomes: how do we maintain the relation from this 1155 ID, which is a hash, to the actual metadata of the token? And when I say metadata here, I'm referring to the 1155 metadata. If deposits are implemented as 1155s, the metadata is the token address, the Grown Stalk index, and the token ID, in the order we've been using.

So the question becomes: how do you implement a balanceOf and a transfer function on top of this? For balanceOf, the requirement is to query by the hashed token ID. Implementing that is the simpler part: it means switching from an indexing system that maps token address to Grown Stalk index to (amount, BDV), to an index from hashed ID to (amount, BDV). That allows the balanceOf function to determine the amount the user has deposited. On the transfer side, and maybe to take a step back, the goal is to somehow define the relation from hashed ID to metadata, specifically for the transferFrom function, because you call transferFrom with an ID that is hashed, and the function needs to know how much Stalk and Seeds to remove and how much to decrement your BDV balance by.

The simplest way is to store that relation on-chain: every time some new ID is hashed or minted, a storage variable is set in some mapping from token ID to token address, Grown Stalk index, and the optional token ID. That's all fine and dandy, but the cost of on-chain storage is quite high. Each individual slot holds 32 bytes and costs around 21-22k gas to set. This means every new deposit would carry an increased gas cost of at least 20k gas to set that metadata. Remember, the metadata for ERC-20 deposits can be compressed into 32 bytes (one slot), but for NFTs it is 64 bytes (two slots). So each new unique ERC-20 deposit would increase gas costs by roughly 22k, and each new NFT deposit by double that. That's the base case, and it's one option: option one is to store the metadata on-chain, increasing the gas cost of probably every deposit.

The alternative is not to store the metadata on-chain. The metadata is already emitted in the AddDeposit event. So by linking the AddDeposit event to the transfer mint event (a mint event is just the TransferSingle event where the from address is the zero address), and performing off-chain linking of token ID to metadata by parsing events, we can define the relation from metadata to hashed ID. For the balanceOf function, so long as things are indexed on-chain by hashed ID, balanceOf can return a proper value. As far as the deposit and withdraw functions go, these already take in the metadata, and from the metadata it's very easy to generate the hashed ID. So no functionality needs to change from the perspective of the Silo, other than storing against this index of hashed ID instead of token address to Grown Stalk index.

The transferFrom function, very similarly to the Plot situation, can leverage the generic, abstract bytes: the data parameter that can be passed into safeTransferFrom, the same place where start and end would be specified in the Plot implementation. In the deposit implementation, a deposit 1155 contract can mandate that the metadata is encoded in that data, and it can verify that the metadata is correct by first decoding the data into token address, Grown Stalk index, and optionally token ID, then hashing it, verifying the result matches the ID the user passed in, and then proceeding with the transfer. If this highly custom implementation is used for deposits, it would break native compatibility with integrations that don't support custom 1155s that take advantage of the data field. But it is a way to implement deposits as 1155s without requiring any sort of metadata to be stored on-chain.
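A sketch of that verification path, under the same naming assumptions as above; the internal transfer that also moves Stalk and Seeds is left abstract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch: an 1155-style transfer that carries the deposit metadata in the
/// generic data field instead of storing it on-chain. Hypothetical names.
abstract contract DepositTransferSketch {
    /// Existing deposit-transfer logic (also moves Stalk and Seeds).
    function _transferDeposit(address from, address to, address token, int96 grownStalkIndex, uint256 amount) internal virtual;

    function safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes calldata data) external {
        // The caller encodes the metadata (the hash preimage) into data.
        (address token, int96 grownStalkIndex) = abi.decode(data, (address, int96));
        // Verify by re-hashing: the metadata must match the id passed in.
        require(
            uint256(keccak256(abi.encodePacked(token, grownStalkIndex))) == id,
            "deposit: metadata mismatch"
        );
        _transferDeposit(from, to, token, grownStalkIndex, amount);
    }
}
```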
So, staying within the 1155 design space, the first question is: does this hashing of the metadata, using the hash as the ID of the token, seem sound? Second: should it be a new contract, a Deposit 1155, or should it be Beanstalk itself? And third: should the metadata be stored on-chain, increasing the gas cost for all depositors, or should it be assumed that the metadata is computed off-chain using events and passed into safeTransferFrom as generic calldata?

Did you say that the hash would be based on the token address, the Grown Stalk index, and then also the token ID?

Yes. And the token ID would only be for 1155s and 721s. Currently there's no support for 1155s and 721s in the Silo, so this would just be future-proofing: when 1155s and 721s are added to the Silo, there is already a specification for how they can be incorporated into the existing token standard.

So if we each deposited ten Beans at the same time in the same season, wouldn't that give us the same hash?

Yes, exactly. But that's exactly as is the case with the current deposit function.

Right, so the hash doesn't make my 1155 unique. You and I will each get our own 1155 balance; it will just have the same ID.

Yes. And because it is the same token at the same Grown Stalk index, they are the same in nature, right?

Yeah. Cool. So are you thinking this change should happen now, before the Grown Stalk index modification is deployed?

The big change to the actual implementation is the way deposits are indexed. Looking at the 1155 standard (I need to pull it up again real quick), as far as I'm aware the balanceOf function does not have space to include generic bytes. So what's important is that by calling balanceOf with just the hashed ID, Beanstalk is able to determine how much of the token a user has. What this means is: currently in the Silo, deposits are stored in this mapping from token address to Grown Stalk index to deposit. This change would require that indexing system to change to just hashed ID to deposit, such that the balanceOf function, knowing only the ID, is able to retrieve the deposit. Does that make sense?

Yeah. But you're still thinking this change should happen before any kind of Silo V3 migration?

Well, in Silo V3 we're already changing the indexing system and adding legacy support for the previous indexing system. To later change the indexing system again would only add another layer of legacy support. So if the indexing system is already changing, it's probably a good idea to change now to the new indexing system, such that there isn't a requirement to move to a new indexing system again in the future.
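In storage terms, the change being described might look something like this (layout simplified, names hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch of the indexing change: from (token, index) keys to a single
/// hashed deposit ID, so balanceOf can resolve a balance from the id alone.
contract SiloIndexSketch {
    struct Deposit { uint128 amount; uint128 bdv; }

    // Old shape (simplified): account => token => grown stalk index => deposit.
    mapping(address => mapping(address => mapping(int96 => Deposit))) internal legacyDeposits;

    // New shape: account => hashed deposit id => deposit.
    mapping(address => mapping(uint256 => Deposit)) internal deposits;

    function balanceOf(address account, uint256 id) external view returns (uint256) {
        return deposits[account][id].amount;
    }
}
```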
And changing the indexing system is just: every time a deposit is accessed or stored, take the hash of the token address and Grown Stalk index and store it at that hash, instead of storing it directly in the two-level mapping. From your perspective, does that seem like a reasonable thing to expect? Does it seem like a pretty significant change, or do you think it would be pretty easy to sub out that indexing system?

Well, I'd have to think through it a bit more, just to work out what other downstream effects it might have. Like, right now in the UI you can see what seasons you deposited in, which in theory we should still be able to get from the events that were emitted; anyway, that's now going to change to the Grown Stalk index at which you deposited. The only problem is, from this hashed ID we're going to have to make sure you can always get out the three inputs: the address, the Grown Stalk index, and the token ID. So does that mean it's always going to rely on the UI to input those? For example, say I wanted to do a convert. For the convert I would need to know the address of the token of my deposit, the index at which it was deposited, and also the token ID.

Yes, the UI is going to have to input those every single time you want to do any interaction with the Silo. Which should be fine, as long as that data is available through the events that were emitted.

Totally, and that's a great point, Pizzaman, thank you for bringing that up. But isn't that how Beanstalk currently works? When you deposit, you give it the token address and the Grown Stalk index. From my perspective, with this change, nothing about the interface or any of the existing function signatures has to change at all. This only matters for a couple of new functions, namely balanceOf and the corresponding balanceOfBatch, safeTransferFrom, and safeBatchTransferFrom. The existing functions don't even need to know what the hashed ID is; they can still take in the token address and the Grown Stalk index.

Well, yeah, but basically any interaction with the Silo is going to have to perform this hash every single time to make sure the input is indeed valid. Is that a significant gas cost, or is it pretty cheap to do that hash?

Hashing with keccak is actually cheaper than you might think. For reference, every time you use a mapping, the EVM already hashes the key concatenated with the storage slot the mapping lives in. So a mapping access is already performing a keccak operation, maybe two when the mapping is nested. Just quickly pulling up the Ethereum yellow paper so I'm not wrong: the cost of a keccak256 operation is 30 gas for the base plus 6 gas for each 32-byte word of input. So for a 32-byte preimage, as in the ERC-20 deposit case, we're looking at a hashing cost on the order of tens of gas, call it a couple hundred at most with overhead, which is not substantially high, and it's something that already happens when using a mapping. So from my perspective, the gas cost here is quite marginal.
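For reference, a small illustration of the point about mappings. Solidity computes a mapping entry's slot with exactly this kind of hash, priced at 30 gas base plus 6 gas per 32-byte word, so this 64-byte preimage costs 30 + 12 = 42 gas before memory costs:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// How Solidity derives the storage slot for an entry of a mapping declared
/// at slot `slotIndex`: keccak256 over the key concatenated with that slot.
function mappingEntrySlot(bytes32 key, uint256 slotIndex) pure returns (bytes32) {
    return keccak256(abi.encode(key, slotIndex));
}
```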
Cool. I'm just trying to look back at the code and think this through. Does this also mean that every single time you Mow, you have to pass in this info? Actually, I guess this is okay, because as long as we keep those indices we know the last Grown Stalk index at which a user Mowed, and we just need the difference between the current Grown Stalk index and the previous one at which they Mowed. So yeah, I guess that will work out just fine.

It's also worth bringing up that we could go with the concatenation strategy, without the hash, for just ERC-20 tokens, and then when 1155 and 721 tokens are added, those get hashed IDs. But by doing that, Beanstalk would be operating in a space where it has two different token ID systems depending on what type of token a deposit corresponds to, which might get quite confusing and feel a little janky. Does anyone have thoughts on that? Does it feel like it makes sense to use the same hashing system for all types of deposits?

So what would you take the hash of for an ERC-20 deposit?

For ERC-20 deposits, which are currently the only ones supported: say we shrink the Grown Stalk index to 96 bits so it's only 12 bytes, and a token address is 20 bytes. What this means is that the token ID can be the concatenation of the 20 bytes of the address and the 12 bytes of the Grown Stalk index, and still be 32 bytes long. So that could just be the ID, without having to hash anything. When introducing other value standards (721s, 1155s) into the context of depositing into the Silo, the domain space extends to 64 bytes, because there is now also a token ID associated with the deposit: you deposit an NFT with this address, at this Grown Stalk index, with this ID.

Yeah. The only clarification would be that the Grown Stalk index is a signed int instead of unsigned, so I guess that shortens the total range of Grown Stalk it can represent, but there should still be enough room to account for future Bean growth.

Totally, good point. From some of the preliminary discussions around this, it seemed like the general consensus was to use a single methodology, the hashing methodology, so that it's future-proof. And personally I think the implications of doing so are quite minimal. As stated, on the contract side it's only a couple hundred gas at most to perform the hash operation.
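For comparison, a sketch of the concatenation alternative for ERC-20 deposits, which is reversible without any stored metadata, assuming the Grown Stalk index fits in an int96:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Hypothetical reversible ID for ERC-20 deposits: the 20-byte token address
/// packed with the 12-byte (int96) grown stalk index is exactly 32 bytes.
library ConcatIdSketch {
    function toId(address token, int96 grownStalkIndex) internal pure returns (uint256) {
        return (uint256(uint160(token)) << 96) | uint256(uint96(grownStalkIndex));
    }

    /// Unlike a hash, the inputs can be recovered by splitting the ID.
    function fromId(uint256 id) internal pure returns (address token, int96 grownStalkIndex) {
        token = address(uint160(id >> 96));
        grownStalkIndex = int96(uint96(id)); // bit-for-bit round trip of the signed index
    }
}
```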
The metadata side, I guess, is where it's going to get a little more tricky, because with the 1155 standard also comes this notion of being able to actually query the metadata, which brings up the question of whether that metadata should be on-chain. Let me pull the standard back up real quick. There's this uri function which, given a token ID, returns a string: a URI which, when accessed, returns some JSON that specifies the metadata for that token ID. When you go on OpenSea, say you bring up the BeaNFT Genesis collection and click on one, you see it has all these attributes; the one I just pulled up has background: light yellow, body: coffee, halo: reef. All of those attributes are returned by the uri function on the NFT contract. So this uri function is how clients and user interfaces access the metadata related to an item. In the case of deposits, the token address, Grown Stalk index, and token ID should probably be attributes of this metadata.

Now, as the NFT ecosystem has developed, there are numerous ways to actually store metadata on-chain, and thinking from the perspective that the goal is to maximize decentralization here, an on-chain metadata solution for 1155 deposits should be considered. There are roughly two options. Option one is some sort of indexer that listens to every new mint event and every new deposit event, generates the metadata, stores it all on IPFS somewhere, and somehow updates the URI in the Beanstalk contract to point at that metadata on a decentralized file store. This is less than ideal from a decentralization perspective, although how important it is to have strong censorship resistance around the metadata is unclear, since the metadata can always be recalculated from the actual on-chain events, and it should be quite trivial to verify that the metadata is correct. With the Replant, it should be noted, the decision was made to use off-chain metadata for Fertilizer, and one downstream implication is that the Fertilizer metadata is not stored on-chain; I believe there is an RFC that was submitted to move Fertilizer metadata on-chain, which is definitely a good direction for Beanstalk to head in. With deposits, however, it's a bit different of a situation, where storing the metadata on-chain increases the gas cost substantially for all users. So is that substantial increase in gas cost worth the additional ease of being able to fetch the corresponding metadata without having to do the event parsing yourself? That's a fairly big consideration.
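As one data point on what on-chain metadata could look like, here is a sketch using the common data-URI pattern with OpenZeppelin's Base64 and Strings utilities. It assumes the contract can recover (token, index) for an id, which the pure hashing scheme can only do if that metadata was stored at mint (or if a reversible concatenated ID is used):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {Base64} from "@openzeppelin/contracts/utils/Base64.sol";
import {Strings} from "@openzeppelin/contracts/utils/Strings.sol";

/// Sketch of a fully on-chain uri() for deposits; hypothetical names.
abstract contract DepositUriSketch {
    /// Must be backed by stored metadata or a reversible (concatenated) ID.
    function _metadata(uint256 id) internal view virtual returns (address token, int96 grownStalkIndex);

    function uri(uint256 id) external view returns (string memory) {
        (address token, int96 index) = _metadata(id);
        // Build the ERC-1155 metadata JSON directly in the contract...
        bytes memory json = abi.encodePacked(
            '{"name":"Beanstalk Deposit","attributes":[',
            '{"trait_type":"token","value":"', Strings.toHexString(token), '"},',
            '{"trait_type":"grownStalkIndex","value":"',
            Strings.toHexString(uint256(uint96(index)), 12), '"}]}'
        );
        // ...and serve it as a base64 data URI, so no off-chain store is needed.
        return string(abi.encodePacked("data:application/json;base64,", Base64.encode(json)));
    }
}
```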
Whereas if the concatenation method is used, at least just for ERC-20 tokens, it kicks the can down the road a little, in the sense that the decision doesn't need to be made until we get to the point where 1155s and 721s are whitelisted into the Silo. By then, gas fees could be lower, and Beanstalk, who knows, might even be migrated to some ZK rollup with lower gas costs. We might be dealing with a world where network-native factors have changed such that a 20,000 gas cost is pretty insignificant. So if it does feel like an important first principle to have all metadata stored on-chain for all asset types, it might make sense to use the concatenation method for ERC-20 tokens and only use the hashing method for 1155 and 721 tokens. I'd be curious for people's thoughts on whether using the concatenation method for ERC-20 tokens and the hashing method for 721/1155 tokens (assuming it doesn't break the standard, which it doesn't seem to) feels a little over the top or ridiculous from a developer-experience perspective.

So is the assumption that when a deposit happens, a corresponding event will be emitted with the hash for that deposit, and that it also includes the inputs that resulted in that hash?

Looking at the 1155 standard here (again, I have to quickly find it): they have this TransferSingle event, and every time a new token is minted, it's required that this TransferSingle event is emitted. I'll drop it in the chat real quick. As you can see, the TransferSingle event has no space for arbitrary bytes. Now, when you deposit Beans into the Silo, as is currently the case, an AddDeposit event is emitted which specifies the token, the Grown Stalk index, the amount, and the user who deposited. With the implementation of 1155, there will probably be some additional event also emitted, in the form of this TransferSingle event, from the zero address to the user's wallet, with the ID (either the concatenated ID or the hashed ID) and the value. It might also make sense to add the ID to the AddDeposit event. But the idea is that, whether it's done across two events or maybe just one, every time a deposit is made there will be some event emitted which can be used to define the mapping between token address, Grown Stalk index, and the ID.

So worst-case scenario, later on down the road, if you only have the hash, all you have to do is parse the blockchain for emitted events that match up with that hash, and you can find all the inputs pretty quickly.

Exactly.
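A sketch of that mint path, with the event shapes from the ERC-1155 spec and an AddDeposit event modeled loosely on Beanstalk's (field names may differ):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch: emitting the 1155 mint event alongside the existing deposit event
/// so off-chain indexers can join them and map id => (token, index).
contract DepositEventsSketch {
    event AddDeposit(address indexed account, address indexed token, int96 grownStalkIndex, uint256 amount, uint256 bdv);
    // Per the ERC-1155 spec; a mint is a TransferSingle with from == address(0).
    event TransferSingle(address indexed operator, address indexed from, address indexed to, uint256 id, uint256 value);

    function _mintDeposit(address account, address token, int96 index, uint256 amount, uint256 bdv) internal {
        uint256 id = uint256(keccak256(abi.encodePacked(token, index)));
        emit AddDeposit(account, token, index, amount, bdv);
        emit TransferSingle(msg.sender, address(0), account, id, amount);
    }
}
```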
So in that case, it seems like that's fairly decentralized, and you wouldn't necessarily want to waste the extra gas to include all of that on-chain for every single deposit.

Yes, Pizzaman, I think that's a great point, and from my perspective that seems appropriate. There is an argument to be made that if there is some function on-chain, part of the Beanstalk contract, that returns an address pointing to some off-chain data store, that might be a problem from a decentralization perspective. Hard to know. But generally I'm in agreement that so long as it can be done via some on-chain method, that is sufficient. I'd be curious to know if anyone disagrees with that general idea.

Just to restate the tradeoff at the margin: the concatenation method for ERC-20s and the hash method for other token types is appropriate if it allows us to store the metadata on-chain, at least in the short term, given that when the token ID is a concatenation of token address and Grown Stalk index, one can determine the address and index from the ID just by de-concatenating it. But in particular, you raised that there may be a problem from a decentralization perspective around having, or not having, the ability for a function within Beanstalk to return that data on-chain. Is that correct?

I can't hear you; I'm having a hard time hearing you. I don't know if that's on your end or my end, but I want to make sure this is recorded if it's on your end. Can others hear? Hello? Yeah, now I can hear you. All right, apologies for that.

So the open question is whether the on-chain function that returns the metadata is properly censorship resistant. In the case where concatenation is used, it can be on-chain for free. In the case where hashing is used, it's going to cost roughly 20,000-plus gas to store the metadata on-chain, though only for the first deposit of each new ID.

Okay, understood. Again, it's a little hard to hear you, although I think we were able to make that out. So the question is whether that's an acceptable gas margin, and also whether that's a required gas cost at all, given the possibility of concatenation?

Yeah, there are multiple questions here. Number one: is it important to have the actual metadata returned by the contract, censorship resistant, given that you can always query it by indexing past events? Then: if the answer is yes, is 20,000 gas a reasonable amount to impose on the user to make that possible? And if the answer to that is no, the general alternative is: is it a better solution to use concatenation for ERC-20 tokens now, so that the 20,000 gas doesn't have to be imposed to make it possible?

How much is 20,000 gas at the moment in dollar terms? Anyone have a calculator on hand?

I think it's normally something like $0.50 to $2, but I don't know right now.

Maybe one other thing to consider: if the inputs for any particular deposit are not available through, say, a Solidity function call on-chain, does that limit what kinds of protocols can be built on top of this? Or is it safe to assume that any protocol built on top can just have a UI that parses the events emitted on-chain when the deposits were initially made, verifies that the inputs match a particular hash, and interacts with the data from there? Or is it a requirement that they be able to read the inputs for a deposit on-chain?

That's a really good point, Pizzaman.
Personally, I'd have to think about it a little more, and I'm curious for anyone else's thoughts here. But my guess would be that they can index the events off-chain; it's just going to be more work, and more difficult from a developer perspective, to do that. They could also just assume that the non-censorship-resistant metadata returned off-chain is valid, and maybe that makes it easier. But you are correct that there would be no function that says: given this ID, give me the token address and the corresponding Grown Stalk index.

With tools like The Graph, is it possible to make a simple API endpoint that takes in an ID and returns the inputs?

Yes. Through GraphQL there's definitely a way to do that, if you're just talking about taking in the hashed ID and getting out what was used to make that hash. We essentially do that for Pod orders today.

Okay, awesome, then it sounds like it's pretty doable. All right, we have about 15 minutes left in this call. Do we want to talk about the path forward for figuring out the answers to these questions, and over what time horizon we need to figure them out?

Yeah. For my part, I would appreciate some more thoughts from the community at large around what they feel is appropriate, given all of the considerations mentioned on this call about the different solutions. From my perspective, the most urgent is just to decide whether to change the current indexing method to be a hash. If the decision is made to use concatenation for ERC-20 tokens, the switch to the hash structure becomes optional; but switching to the hash structure doesn't break anything, even if the decision to use concatenation is kept. So perhaps it makes sense to just switch to the hash structure at the implementation level now, and then spend some more time reasoning out, hashing out, the details of the implementation. Some sort of write-up specifying the pros, cons, and alternatives of the different solutions might be helpful for getting more community feedback. But those are my thoughts.

So from a timing perspective, could you restate, among all the open questions we talked about today, which ones would impact Pizzaman's work in the near term?

In the very near term, I'd defer to Pizzaman, but from my perspective, changing the indexing structure shouldn't have too many ramifications, though obviously there are potential issues. What that change would entail is: every time a deposit is accessed, hash the inputs and fetch the value at the hash; and every time it's set, hash the inputs and set the value at the hash. But Pizzaman is the one who's been hands-on with the new code, so he probably has a better idea. There's also a discussion to be had around the events themselves.
The events themselves, and whether there should be some kind of batch migration at the time of deployment or whether people should individually migrate their deposits, and how to do that, might be important. Whereas just changing the indexing system should hopefully be a smaller change that isn't too significant, adding the full 1155 system is definitely going to be a significant addition to the developer workload. But again, it seems inevitable, and it's a prerequisite to getting some sort of deposit market online using Seaport. So from this perspective, it might make sense to just move forward with the full ERC-1155 implementation for deposits as part of this upgrade. We haven't had a chance to discuss how migration would occur, but thankfully that's downstream of finalizing the implementation, so maybe that's a conversation for a later date. Migrating to this new Silo system already gives us a clean cut between pre-upgrade deposits and post-upgrade deposits, and gives us a system by which we can leave the migration of deposits to 1155s, from their existing non-standard structure, to the users: the next time they use a deposit, it will be minted as an 1155, and the 1155 balanceOf function will only return amounts that have been transferred to the new system. So from an elegance perspective, I personally would prefer to implement them at the same time. But if there is urgency to get the current Silo structure out now, then at the very minimum, changing the indexing system will be beneficial.

I think the only consideration is that there was talk of wanting to release the Seeds rebalancing before the release of Wells. Is that still a consideration?

Yes. So, I'm not totally sure what all the details are of implementing the 1155 standard, so it's hard for me to say at the moment how much longer that would take me to implement. At the moment I'm working on the migrate functions. Just to give some color: the first time you interact with the Silo after the Grown Stalk per BDV index change is deployed, you will have to call a special function that migrates your deposits from the old storage system to the new one. In order to do that, you'll have to provide the season of every single deposit you have, along with the corresponding token you deposited into the Silo. On-chain, it will go through and add up the BDV of all your deposits, verify that the current amount of Seeds you have corresponds to the deposits you input, and then remove your Seeds and move all of your deposits over to the new deposit system. At the moment I'm just looking at a couple of small rounding issues that I can probably get figured out. But in terms of what the other details are for implementing 1155, it's not totally clear to me, so it's hard to give a timeline on how long that might take.
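A hypothetical shape of the migrate flow just described; all names and checks are illustrative, since the real function was still being implemented at the time of the call:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch of a one-time migration from season-indexed deposits to the new
/// index: the caller lists every legacy deposit, the contract recomputes the
/// Seeds they imply, checks that against the account's balance, and re-files
/// each deposit under the new indexing system.
abstract contract SiloMigrateSketch {
    function _legacyDeposit(address account, address token, uint32 season) internal view virtual returns (uint256 amount, uint256 bdv);
    function _seedsPerBdv(address token) internal view virtual returns (uint256);
    function _seedBalance(address account) internal view virtual returns (uint256);
    function _fileUnderNewIndex(address account, address token, uint32 season, uint256 amount, uint256 bdv) internal virtual;
    function _removeSeeds(address account, uint256 seeds) internal virtual;

    function migrate(address account, address[] calldata tokens, uint32[][] calldata seasons) external {
        uint256 seeds;
        for (uint256 i; i < tokens.length; i++) {
            for (uint256 j; j < seasons[i].length; j++) {
                (uint256 amount, uint256 bdv) = _legacyDeposit(account, tokens[i], seasons[i][j]);
                seeds += bdv * _seedsPerBdv(tokens[i]);
                _fileUnderNewIndex(account, tokens[i], seasons[i][j], amount, bdv);
            }
        }
        // The user must have listed every deposit: the recomputed Seeds have
        // to match the account's current balance before legacy Seeds are removed.
        require(seeds == _seedBalance(account), "migrate: incomplete deposit list");
        _removeSeeds(account, seeds);
    }
}
```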
Yeah, that's a great point, Pizzaman: what are the other details? If you skim through the 1155 spec, the main functions are balanceOf, the transfer function, some sort of approval system, and then a batch version of each (sketched as an interface after this exchange). The balanceOf function just needs to take in the index and return the amount at that index. The transferFrom function is mostly going to wrap the existing transferDeposit function, with the added decoding. We'll need to implement the approval system, since the current approval system does not adhere to the one defined in the 1155 specification, and then the uri function will need to be implemented. So it is a fairly substantive amount of work. But one of the main benefits is that this migration function would be the perfect instance in which to perform a batch mint event that gets everything up to par with the current system. Some sort of mint-on-migrate will be necessary regardless when making the change to 1155s, and personally I'd prefer not to have to do that twice, just from a complexity perspective, especially since it seems like it makes sense to move to the 1155 standard sooner rather than later to get that Seaport support. But it might make sense to create some larger documentation around what's actually required for the standard and have that discussion on a follow-up call.

Also, if 1155 is implemented for deposits, does Beanstalk have to decide at the same time whether or not it will support Pods as 1155s as well?

I personally don't think that decision needs to be made at the same time; it can be made later. If Plots are going to be implemented as a separate 1155 contract, there's no overlap at all between the two. If they're both going to be implemented as 1155s within Beanstalk, then it's just going to require updating the transferFrom function to include the Plot transfer logic as well, and the same for balanceOf. It probably makes sense to implement them as separate 1155 contracts; personally I'm leaning towards doing Plots as a separate contract and doing deposits within Beanstalk itself.

Sounds good to me.

Great. So given that they're probably going to be different contracts, they can be done at different times.
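For reference, the interface surface being described, with signatures per the ERC-1155 specification (uri belongs to the optional metadata extension):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// The functions Beanstalk would need to expose for deposits to be 1155s.
interface IERC1155DepositsSketch {
    function balanceOf(address account, uint256 id) external view returns (uint256);
    function balanceOfBatch(address[] calldata accounts, uint256[] calldata ids) external view returns (uint256[] memory);
    function safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes calldata data) external;
    function safeBatchTransferFrom(address from, address to, uint256[] calldata ids, uint256[] calldata amounts, bytes calldata data) external;
    function setApprovalForAll(address operator, bool approved) external;
    function isApprovedForAll(address account, address operator) external view returns (bool);
    function uri(uint256 id) external view returns (string memory);
}
```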
Awesome. So are there any parts that have to be done at the same time?

No, they do not have to be done at the same time. Is there any condition in which they would have to be? If Plots are a different contract, then no; they can be considered independent.

Yeah. So the only thing with a tight timeline is the deposits?

Yes, and the tighter timeline is probably on the indexing system. But it would be great to do it all as one, not including the Plots.

What does "all as one" mean in this case?

Apologies: meaning the whole transformation of deposits into the 1155 standard, as was discussed. Really, it would be great to at least use the hashed indexing system now, just so that there doesn't need to be a later transition to a new indexing system for deposits internally, as it relates to storage. At the very minimum, we change the indexing system to be the hash, and maybe save the full deposit ERC-1155 implementation for later; that is an option as well. So: at a minimum, migrating the indexing system; at a maximum, doing the whole migration to deposits as 1155s.

Pizzaman, do you have a sense of how much time each of those would take, the piece Publius just described about changing the indexing system versus the entire migration to 1155?

Well, I can't say exactly, but it seems very doable; I feel like the scope is pretty clearly defined. There's always something that pops up when you get into the guts of the code and start changing stuff, where you go, oh shoot, now how am I going to make this function work? But I can keep you all updated, or I can tag-team it with someone else who would like to help out.

Great. I just wanted to see if that was the question you were answering earlier or the same one. All good; I feel like I got all my questions answered on that front. It's just not totally clear on timelines, for both the entire migration to the 1155 standard and the minimum work required to change the indexing system to a hash.

Yeah, I guess that's a good question for Publius: do you think we can deploy the hash system and then tack on 1155 later without requiring another migration?

Yeah, I definitely think that's possible. There will probably just be some additional complication around how to handle the migration of existing deposits to 1155 tokens, but that can always be handled later. The overall workload will definitely be higher if it's done separately, but it is an option to just do the indexing system now and hold off on the full implementation for later.

Okay. But it sounds like, overall, doing the full 1155 now is the way to go from your perspective?

That makes sense to me. If the community and the DAO at large feel like there's real impetus to get moving on this, and that adding an extra one to two weeks of work probably isn't best, then it might make sense not to do the whole system. But from my perspective, doing the whole deposits-as-1155 now would be a great milestone to get done, and there is going to be significant overhead in doing it as two separate upgrades.

We're also running over the two hours set aside for the call. But perhaps some more discussion is in order to figure out what the timeline would be extended by in each case, or neither, I suppose, although that seems like less of an option at this point. So feel free to interrupt me if you think we should keep going on a particular point in this session, but otherwise I think we should probably call it there.

All right then. Thanks, everyone, for coming.