Beanstalk Dev Call #6

February 21, 2023

0:00 Intro • 1:04 Parameters of The Pump • 3:53 Intro to Pump Oracles • 5:04 Equal Weighting of Values in SMA • 6:23 Multi-Block MEV Problem • 26:33 Uses of Oracles • 36:03 Frequency of Multi-Block Assignments • 45:29 Performance of Oracle Parameters • 1:34:36 Minting Based on SMA vs EMA • 1:41:05 Need For Written Explanation of This Content

Dev Call


Meeting Notes (WIP)

The Pump

  • Parameters we can tune
    • How far should the lookback be? How far is the EMA averaging over?
    • Gamma parameter - % increase or decrease permitted in each balance per block
  • We can use a Geometric EMA, Arithmetic EMA, Geometric SMA and/or Arithmetic SMA
  • SMA weights historical values the same. The value leaving the SMA could be more significant than the entering one. For that reason EMA is often preferable.
  • Arithmetic and geometric are similar, but in the case of multi-block MEV they behave differently (geometric may be more resistant)
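The SMA/EMA behavior noted above can be sketched in a few lines. This is illustrative Python only, not the pump's on-chain implementation; the window size and smoothing factor are made up:

```python
# Sketch: an SMA can move opposite to the latest observation, because the
# value leaving the window counts as much as the one entering it. An EMA
# decays old values geometrically, so it always moves toward the newest value.
from collections import deque

def sma_stream(values, window):
    """Simple moving average over a fixed lookback window."""
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

def ema_stream(values, alpha):
    """Exponential moving average; alpha weights the newest observation."""
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

# A large old value (100) leaves the 3-value window just as x ticks up 10 -> 12:
xs = [100, 10, 10, 12]
print(sma_stream(xs, 3))   # SMA falls even though x rose
print(ema_stream(xs, 0.5)) # EMA moves toward the latest value instead
```

This is the counterintuitive case discussed later in the call: the SMA drops on an uptick because the departing value dominates, which is why the EMA is preferred for instantaneous-value queries.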

What does it look like for somebody to make a multi-block MEV attack?

  • Nation-state attackers are a concern
  • An attacker adding liquidity to the pool could cause an excessive amount of Bean to be minted, potentially destabilizing Beanstalk.
  • An attacker could also cause excess Stalk to be issued through the manipulation of multiple pools.

Other considerations

  • Arbitrarily capping deltaB creates an inefficiency.
  • Pumps can be customized depending on the use case of the well


All right, why don't we go ahead and kick things off? As I understand it, there are a couple of different things we wanted to discuss and get to today. One is what parameters or configurations to set on the pump that Beanstalk would use as an oracle, which would require a BIP to change after the BEAN:ETH Well is deployed. The other is some of the trade-offs of making Beanstalk-native assets like Pods conform to existing ERC standards like ERC-1155, versus the friction involved with proposing our own. With that said, Publius, I think it'd be helpful if you could set the scene to some extent and talk about what questions you're hoping to get answered in particular. Awesome, sounds good. Appreciate it, Guy. Good morning, everyone, hope everyone is having a great week on the farm. So first off, to talk a little bit about the pump itself and what the parameters we can tune are. I'm going to be doing a few screen shares. The pump comes with a few parameters that can be tweaked. The first is the actual formulas being used for the pump: in this case, there's both an EMA and an SMA that can be used. Then there are these three parameters right here. The first, for the EMA that's currently being used, and really for the SMA or any instantaneous-value oracle as well, is how far the lookback should be, in terms of how far back the value is averaging over. This could be 30 minutes, it could be an hour, it could be 15 minutes, it could be even longer. That's ultimately one of the decisions that will need to be made, and the hope is to try to make that decision today.
The second is this gamma parameter, which is some notion of how far the oracle is willing to tolerate a balance moving every block. This could be 100%, it could be 50%, it could be 200%, it could be anything. Today we'll do some analysis looking at how different gamma parameters change how the oracle behaves. So now let's get into things a little. First off, we want to showcase some very simple data about how the oracle performs. Here are some very simple graphs. The main four oracles that are going to be showcased today are a geometric EMA, an arithmetic EMA, a geometric SMA, and an arithmetic SMA. This chart here is just a random distribution, about a thousand with a standard deviation of 50, and you can see how these different oracles perform on it. The first thing to note is that when dealing with concentrated values, meaning values with a fairly reasonable standard deviation about a mean, the arithmetic mean and the geometric mean perform almost exactly the same. You can see the orange and green lines here, the geometric and arithmetic EMAs, are about the same, and the SMAs here in purple and red also behave about the same between the arithmetic and the geometric variations. One thing that's very interesting about the SMA specifically is that it weights all historical values the same, and you end up in situations like this one, where my cursor is, where x goes up but the SMA actually decreases, and in this example here, x goes down and the SMA actually increases.
Because the SMA weights all of the historical values the same, the value that's leaving the SMA matters as much as the one entering it. Say the lookback is ten, so the leaving value is the one from ten values ago: if it's significantly large compared to the one entering, which is what's happening here, then the SMA is going to move in the direction opposite to x, which isn't really desirable behavior. For that reason we generally tend to favor the EMA when it comes to instantaneous-value queries, as it weights the present slightly more, but we'll get into the specifics a little more later. Here's another graph showcasing some differences between them. This one is quite interesting: it's a one-block or two-block MEV manipulation, where someone moves one of the balances in the oracle to something arbitrarily large. Interestingly, you can see how the arithmetic means inflate dramatically more than the geometric. The orange and green down here are the geometric averages, and their relative percent increase is dramatically less: the arithmetic EMA moves over 450%, or I guess this would be 400%, while the geometric ones down here barely move, maybe five percent. Here's the downside case, where someone does the opposite and sets a balance to be incredibly low. This would be the case where someone performs a swap and one of the balances increases a lot while the other decreases a lot. You can see that the geometric actually performs worse when values are decreasing. However, what's important here is that the magnitude of the increase with the arithmetic is much higher: we're talking hundreds of percent.
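The difference being described in the chart can be reproduced in miniature. This is a hedged sketch, not the Well's code: simple arithmetic and geometric EMAs with a made-up alpha of 0.1, and a single balance spiked for one block:

```python
# Sketch: why a geometric EMA resists a one-block spike far better than an
# arithmetic EMA. The geometric version averages in log space, so a huge
# spike contributes log(v) rather than v itself.
import math

def arithmetic_ema(values, alpha):
    ema = values[0]
    for v in values[1:]:
        ema = alpha * v + (1 - alpha) * ema
    return ema

def geometric_ema(values, alpha):
    log_ema = math.log(values[0])
    for v in values[1:]:
        log_ema = alpha * math.log(v) + (1 - alpha) * log_ema
    return math.exp(log_ema)

base = [1000.0] * 20
spiked = base[:-2] + [1_000_000.0] + [1000.0]  # one manipulated block

a = arithmetic_ema(spiked, 0.1)
g = geometric_ema(spiked, 0.1)
print(a / 1000 - 1, g / 1000 - 1)  # relative inflation of each oracle
```

With these made-up numbers the arithmetic EMA inflates by thousands of percent while the geometric EMA moves by well under 100%, mirroring the shape of the charts being discussed (the exact percentages on the call depend on the real parameters).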
Whereas here we're talking about a decrease of something like 10%; let's say this is 1.35. So just something to keep in mind is how the EMA and the SMA behave under oracle manipulation. What we're talking about with oracle manipulation is an arbitrarily large or arbitrarily small value becoming a data point in the oracle when a malicious actor has access to multiple sequential blocks. I think that's it for this slide. It highlights the problem once again, which is multi-block MEV. Just for reference, going back up here, this red line is Uniswap v2, what they use, and the green line is Uniswap v3. You can see that the green line, using the geometric mean, is definitely a lot more manipulation resistant, and they have a good paper talking about how expensive a 20% oracle manipulation is. But personally, the fact that they don't really look into the downside is a bit of a red flag to me, as the downside oracle manipulation is significantly bigger for the geometric mean. So now, to take a step back and look at the higher-level picture before we continue: what is multi-block MEV, and, importantly, what percent of oracle manipulation is a protocol willing to tolerate when it comes to tuning what the parameters of an oracle should be? I know we've talked about multi-block MEV a lot, so just to rephrase it one more time: multi-block MEV occurs when a single actor has control over block proposition for multiple consecutive blocks on Ethereum.
The validators that propose blocks in each epoch, which is a set of 32 blocks, are defined ahead of time. So let's say someone has a 10% stake: for each one of these slots, they have about a 10% chance of receiving the block proposition. So the chance of two consecutive blocks would be something like one over ten squared, which is about 1%, and you can extend that out to multiple blocks. Obviously that's not exactly correct, as I don't believe the same validator can propose multiple blocks in an epoch, but that's neither here nor there. So one of the things we need to talk about first is what it looks like for someone to try to manipulate the oracle. Want to pause for a sec and dive into the scope of the problem a little bit more? Yeah, to some extent, if you have a single proposer that has alternating blocks, meaning they have one block, then they don't have the second block, and then they have the third block, there is the potential for them to buy the second block and string together a three-block MEV. So particularly as markets develop on top of this and the proof-of-stake system becomes more efficient, I think this problem extends further. As we think about what type of resistance Beanstalk needs, I feel like it's probably better to think about this in the scope of not just where multi-block MEV is today, but where it could evolve to, until there's some sort of grander mitigation. For now, we're probably going to get into a worse situation before it gets better. Yeah, and that's a great point, Publius, thank you for bringing that up.
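The back-of-envelope probability mentioned here (10% stake, two blocks, about 1%) generalizes as follows. This sketch assumes fully independent slot assignment, which, as noted above, is a simplification of Ethereum's per-epoch proposer draw, and it counts overlapping runs, so it overestimates slightly:

```python
# Rough expected count of k-consecutive-block runs per year for an entity
# with stake share p, assuming each 12-second slot is an independent draw.
SLOTS_PER_YEAR = 365 * 24 * 60 * 60 // 12  # one slot every 12 seconds

def expected_runs_per_year(p, k):
    """A run of k consecutive slots starts at any slot with probability ~p**k."""
    return SLOTS_PER_YEAR * (p ** k)

for k in (2, 4, 7):
    print(k, expected_runs_per_year(0.10, k))
```

At 10% stake this gives tens of thousands of two-block runs a year but well under one seven-block run per year, the same order of magnitude as the chart discussed later in the call.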
To talk about what that would look like: it's my understanding that at the start of the epoch the block proposers are picked, so it would have to be a pretty fast exchange for someone to buy control over the execution flow. Not necessarily. You can just pay for execution of a transaction at the front of a block, right? Yes, but what you need is a guarantee that no execution is going to happen against your transaction. I guess if it's the last block in the sequence, yes, but you're going to have to pay a lot for that. If you had added a billion dollars of one-sided liquidity to a pool, it's going to be very profitable for someone to sell into that, and you would basically have to pay the cost of the MEV at the beginning; you'd have to lock it in at the beginning of the epoch, before you play your cards, so to speak. Yes, and I guess it's unclear to me how you could guarantee locking in that kind of transaction at the beginning of an epoch, but it's definitely something to consider. So, to look at the top stakeholders on Ethereum: you can see that Lido has about 30% of the stake, split across 30 different entities. It seems like they each have about 1.4%, and these are different entities that individually control and custody their stake. Coinbase is the single largest stakeholder at about 11.29%, Kraken at 7.58%, then Binance, etc. So we're really looking at just a few parties with more than about 1% of stake. Going back to what the largest potential attack on an oracle would be: it would probably be that of a nation state, where some nation state somehow takes control of the validators existing within its own country.
And then perhaps taking advantage of a stablecoin custodian that also resides in their country, and using that to mint a seemingly infinite amount of the currency that is paired with Beanstalk. In the case of, for instance, the US government: perhaps they might take control of Coinbase, being a US-based company, and then they would have control of about 11.3% of stake. Then maybe they go to Circle and get Circle to perform some kind of flash mint of infinite USDC, which then gets added as one-sided liquidity to a BEAN:USDC pool, with the pool having a 0% fee. This is a free manipulation in the sense that the only cost they pay is gas, plus the potential profit they forgo from including other transactions in the blocks, so it's very minimal. They would leave that liquidity in for as long as the multi-block MEV persists, and presumably on the last block they have access to, or, if they're able to pay the next block proposer as Publius was saying, in the first transaction of the next block, they would remove that seemingly infinite liquidity. One more note on multi-block MEV before we continue: it seems to be the general sentiment in the Ethereum community that being able to forecast who gets to propose which block is bad, and there have been numerous discussions about single-slot proposer elections, or some sort of randomness occurring at every single individual block, so that there's no time to forecast whether a multi-block MEV instance is going to occur. Multi-block MEV is only exploitable because the entity is able to know ahead of time how many blocks in a row it has direct control over.
If it's purely random, they no longer have a guarantee that they're going to have access to X blocks in a row. So the next thing to look at is this chart, where we can see... Could you first describe the effect that that sort of attack would have on Beanstalk, maybe, if it's not obvious to everyone listening? Yeah. Again, in the instance where they mint seemingly infinite USDC and then add that as one-sided liquidity to the pool, the downstream implication is that any sort of oracle would register some number of consecutive blocks where the balance of the USDC token in the pool is incredibly high, which would also mean that the price of Bean in the pool is incredibly high. When it comes time for the Season, the Season would excessively mint a lot of extra Beans, as even one or two blocks out of the whole hour at a 1,000,000% increase is quite substantial. In the case of depositing in the Silo and Converting: Convert wouldn't actually be affected as it's currently implemented, but I think there have been some conversations around changing Convert from using the instantaneous value to potentially some sort of EMA, though that's a discussion for later. On the BDV side, it would decrease the BDV substantially, in the sense that now the price of Bean is really high, which affects the value of the LP token underlying it. They could, through some kind of multi-pool arbitrage, make the Bean price incredibly low in a given pool.
Let's say they mint a bunch of USDC, but rather than just adding one-sided liquidity to the BEAN:ETH pool, they buy a bunch of Beans and then deposit those Beans one-sided into the BEAN:ETH pool. Now the BEAN:ETH pool has an incredibly low value for Bean, and the LP token might be overvalued, and therefore it might allow users to deposit into the Silo and receive excess Stalk temporarily. But I think the bigger concern is the excessive minting: if someone were able to seemingly mint infinite Beans, it would be very detrimental for the system overall and would become a race to sell into the liquidity. Depending on how many excess Beans are minted, if we're talking thousands of percent, it could triple or 10x the Bean supply in a single season, which probably wouldn't be very good. So one of the points that will ultimately need to be decided is: what level of manipulation, what percent of manipulation, is tolerable for Beanstalk? Let's say it means 10% extra Beans, 20% extra Beans, 50% extra Beans. How many extra Beans being minted is tolerable, and not just tolerable but optimal, for Beanstalk to be able to sustain in a significant multi-block MEV attack? Guy, does that clear things up? It does. So maybe it'd be helpful then to just briefly talk about why the existing solution doesn't work in the long term, the existing solution being that only 1% of deltaB is minted. Perhaps that's obvious, but it feels worth talking about, because you just mentioned this example of perhaps 10% or something like that. Yeah, that's a good point.
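As a toy illustration of the excessive-minting concern raised above, with entirely made-up numbers: even one manipulated block out of an hour's worth of twelve-second blocks dominates a time-weighted average:

```python
# Sketch: 300 twelve-second blocks make up a one-hour Season; one block
# records a price inflated to 10,000x (a 1,000,000% move), all others at peg.
blocks = [1.0] * 300     # hypothetical per-block Bean price, in USD
blocks[150] = 10_000.0   # the single manipulated block
twa = sum(blocks) / len(blocks)
print(twa)               # the hourly average is pushed roughly 34x above peg
```

The point is that no amount of averaging alone fixes an unbounded single-block value, which is why the per-block cap on balance changes matters.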
Arbitrarily capping deltaB creates an inefficiency. For instance, say right now there was a participant that was willing to buy and Sow back to peg. This person would be unable to, because the Soil value is arbitrarily capped at 1% of the supply: where Bean might be 900,000 below peg, it's being capped at something like 300,000. Therefore there's this lack of efficiency in Beanstalk's ability to return to the peg. Instead of the user being able to buy and Sow in a single action to return it back to peg, they would have to do it over three consecutive seasons, and there would be no guarantee that in the second and third seasons they would even get any of that Soil, as their demand might lead to other people's demand, and ultimately they may only get one season of Soil. On the flip side, when we're above peg, let's say Beanstalk is an equal amount above peg as it is below peg right now, it becomes an eerily similar situation to before Replant, when generalized minting was not implemented: Bean was millions of Beans above peg, but it was only minting something like 500,000 Beans a season. That creates a feedback loop where the fact that it's not minting quickly enough actually leads to more people entering the system in the short term. It's not minting quickly enough to return back to peg, which leads to more inorganic demand, as Beanstalk is not able to return the price fast enough. Yeah. I guess I'm curious if anyone else has any thoughts on this. Are the units in the chart on the bottom correct, or should they be multiplied by 100? What's in this chart, the number of multi-block MEV occurrences per year? These are correct.
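The Soil-cap inefficiency described above can be made concrete with the numbers from the call (the function name and the way the cap is expressed are just for illustration, not Beanstalk's actual code):

```python
# Sketch: Soil issued in a season when deltaB is capped at 1% of supply.
def capped_soil(delta_b, bean_supply, cap_pct=0.01):
    """Soil available this season, capped at cap_pct of the Bean supply."""
    return min(abs(delta_b), cap_pct * bean_supply)

# With a hypothetical 30M supply and Bean 900,000 below peg, only about a
# third of the shortfall is buyable in one season.
soil = capped_soil(-900_000, 30_000_000)
print(soil)
```

So a sower willing to return the price to peg in one action needs at least three consecutive seasons of Soil, with no guarantee of getting any after the first.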
So this seems so high, right? So someone with, for instance, Coinbase's roughly 10% ownership is going to have between one and two seven-block strands of block proposition per year. Very interestingly, has Coinbase already had one? It's been about, what, five months since the Merge, and there hasn't been a confirmed instance where they had control over seven back-to-back blocks. So yes, this is incredibly high. Getting back to the notion that it's in the ethos of Ethereum to move away from deterministic block proposition: it's probably worth noting that sometime in the next five years it can be expected that this problem is ultimately mitigated and removed. However, it probably won't be in the next year; it'll probably take two, three, four years for that to actually happen. So what we're looking at is a short-term problem, and thus thinking about mitigating it over the short term is really the goal here. That said, it could exist for the next 50 years, or for Beanstalk's whole lifespan, so it's worth putting into perspective. Hey, yeah, I have a quick question. It seems like this pump is being designed specifically to protect Beanstalk from attack vectors. Do we expect that other Wells which get deployed, which maybe are not intended to be whitelisted by the Beanstalk DAO, will use this pump? And if so, could you give an example? Ultimately it's up to the person deploying the Well. There are numerous other protocols that also require some sort of oracle solution.
For instance, Aave and Compound, the main lending protocols, specifically require some sort of oracle solution today. Another thing to note is that Beanstalk needs USD oracles for everything, too. So if there exists a USDC:ETH oracle, and Beanstalk is willing to accept, let's just say for now, USDC at the price of 1 USD, Beanstalk might actually use the USDC pool to determine the price of ETH in dollars, which it will need to do for that pool. Currently, with the Sunrise improvements, Beanstalk is using Uniswap v3 for the ETH:USDC price, and there are some parameters, similar to the minting cap, that set a potential cap on manipulation. So to answer your question, it ultimately depends at the time of pool deployment. It should be noted that oracles are extremely expensive to add to Wells; I think it's something like 25,000 gas per transaction just to update the oracle, which is something like 33% of the gas overall. If anyone knows the actual specific numbers, please shout them out. So when a Well is deployed, it'll ultimately come down to: is there a place for this Well as an oracle anywhere within DeFi? If the answer to that question is yes, then it probably makes sense to deploy it with a pump. There could be variations of this pump that don't support cumulative or time-weighted average oracle queries, or that don't support the instantaneous oracle queries, to make things cheaper. Everything is customizable and up to the user deploying the Well. But there's a lot to consider here.
Go ahead. Just so I understand, the expectation is that, depending on what a certain protocol wants, they'll deploy their own: they might take some inspiration from the design of this specific pump, but they'll probably customize some of the parameter space for what they're looking for. Is that right? Yeah. The goal here is to spend time and research creating an out-of-the-box pump that the Beanstalk community feels is satisfactory. It's still unclear what the output of this is all going to be, but it would be great to have some sort of paper showcasing the research we're going to go through today, which would ultimately help other protocols understand what led the Beanstalk community to the decision it made in regards to this pump configuration. The thought would be that other protocols can use the work that's been done and probably arrive at a similar conclusion about how to configure a pump to where the Beanstalk community did. The goal with this composability is to minimize the amount of code that needs to be written twice. So the hope would be for other protocols to use the existing pump, or a variation of the current pump that might take out one or two of the oracle values, but it's ultimately completely up to whoever is deploying the Well. When you say oracle values, are you referring to the SMA and the EMA, the arithmetic and geometric means? Yes.
Currently the Well employs a geometric EMA and a geometric SMA, as well as a cap on the maximum percent that the balances can shift every block, and each of those three costs some amount of gas per transaction, upwards of 5,000 gas. Let's say multi-block MEV gets fixed at the Ethereum protocol level, so that it's pseudo-random who gets to propose each block and multi-block MEV is no longer as concerning as it is now: maybe someone then removes the cap on the percent balance change. Or maybe someone doesn't see a need to have a time-weighted average oracle. Instantaneous oracles tend to have more use cases, as they give you the price of something right now. Beanstalk requires the time-weighted average oracle for a Season, where it's asking what the average is over the last hour. Something like a lending protocol, or BDV, probably needs an instantaneous manipulation-resistant oracle value, which generally comes in the form of some sort of historical-average lookback to protect against short-term attacks, but where the goal is really to know the price of the asset now, as opposed to over some time horizon. So it's very possible that someone doesn't want the time-weighted average oracle, and maybe they just take that out and use the balance cap plus the EMA, or something. Okay, totally makes sense. So it seems like we're designing this to cover these four different values, so that it can hopefully be used by anyone who wants to use any of them. Totally. And just to be clear, geometric versus arithmetic is a design choice; both of them are not going to exist in the Well.
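A sketch of the per-block balance cap (gamma) mentioned here; the multiplicative clamp and the naming are assumptions for illustration, not the Well's actual implementation:

```python
# Sketch: each pump-recorded balance may move at most gamma per block
# relative to the last recorded value, bounding one block of manipulation.
def capped_balance(prev, current, gamma):
    """Clamp the per-block change in a recorded balance to +/- gamma."""
    upper = prev * (1 + gamma)
    lower = prev * (1 - gamma)
    return max(lower, min(current, upper))

# An attacker pushes a 1,000-token balance to 1,000,000 in one block;
# with gamma = 0.5 the oracle only records 1,500.
print(capped_balance(1000.0, 1_000_000.0, 0.5))  # 1500.0
```

Chained over k blocks, the recorded balance can move at most (1 + gamma)**k, which is exactly why the tolerable number of consecutive attacker blocks drives the choice of gamma.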
But they are two examples of statistical variables that have been used as oracles historically, notably the arithmetic mean in Uniswap v2 and in Curve, and the geometric mean in Uniswap v3. It should also be noted that both of those oracles were designed before the Merge, which means they were designed without the context of multi-block MEV. And the purpose of today's conversation is to converge on which mean we want to use, is that correct? The purpose of the call today is, first, to converge on which mean we want to use, arithmetic versus geometric. Second, what time horizon do we want to set on the EMA oracle, and, I guess, do we even want to use the EMA at all, or do we want to switch that to an SMA? There are a lot of design considerations around that; currently it uses an EMA, but it could use an SMA, who knows. And third, to decide what percent cap to set as the max percent change in oracle value per block. In a second we'll get into a lot of examples of how that relates to oracle manipulation overall as we walk through this exercise. Awesome, thank you for the context, really appreciate it. So I guess the next step is... actually, before we continue, does anyone have any questions or thoughts or comments or discussion topics? Great. So the first thing we have to decide is the maximum number of blocks over which an oracle manipulation should be expected to occur. You can see here, this outlines the number of times a year that someone with the given ownership, which is labeled here as a fraction and should be times 100, so 5%, 10%, 20%, 25%, will have a multi-block occurrence with that much percent stake.
So, for instance, someone with 10% is going to get about half of an n-block multi-block MEV per year. Everything that's zero: this only simulates 50 years, so a zero means that it never occurred within the simulated time horizon. These are estimations, not mathematically determined probabilities, so they're not necessarily absolute; there's only precision here down to about 0.02, so a zero means less often than once every 50 years. This one seems to be from a rerun of the simulation with fewer values; for some reason it never occurred, which seems to be just a random artifact. But these were all 50 years of simulation, so they're not exact values. So the question is, given this, what is the maximum percent ownership that should be taken into consideration? This column is per year, and this one is that times 50, since it has the extra zero and the five, so this is over 50 years. The first question is where we should look on the percent-ownership chart. Obviously the largest is 30%, but that's split across Lido's 30 different entities; Coinbase has around 11%. So should around 10% be our target here? Any thoughts? I'd be curious to know what an entity like Coinbase's percent ownership looks like over time. Is 11% less or more than it was around the Merge? Not sure if you know the answer to that. That's a great question; I honestly have no idea.
I know that entities with a large percentage of stake have received a lot of dissent from the Ethereum community, in terms of calls to self-cap the percent of stake that each entity owns. And it's also really hard to know, because withdrawals are not enabled yet — I think it's something like a month until that's expected, maybe early April, when withdrawals will be enabled at the Ethereum level. Currently all of these entities have only been able to accrue more stake. So someone like Lido, which has accrued 30% of stake, can't reduce the amount of stake it owns even if it wanted to, because there's no way to withdraw. So there is some expectation that when withdrawals are enabled, there will be some sort of reshuffling of stake, and larger entities like Coinbase will throttle the amount of stake they have; the hope is that there will be some withdrawal from the larger entities so that things balance out. But it's impossible to predict what's going to happen going forward. I would say that a month or so after withdrawals are enabled, when things are actually in some sort of equilibrium in terms of both deposits and withdrawals, it'll be much better defined what the actual breakdown is. And what would the consequences be of making the wrong decision here — if we learned in three months that the configuration should be something different? Making the wrong decision would just mean that the maximum percent oracle manipulation is slightly higher than anticipated.
So say we pick a configuration where a seven-block multi-block MEV attack leads to a 10% oracle manipulation, and then a nine-block multi-block MEV occurs; the oracle manipulation might be 15% instead of 10. If our target was 10, now there's 5% extra. So at that point, maybe there's some liquidity in the Bean:ETH well and it's to some extent too late? I guess if a pump can only be deployed upon well deployment, it's probably not something that would be changed in the near term — is that fair to say? Yes, that's fair to say. However, there's nothing that would stop someone, or the general Beanstalk community, from deploying a new Bean:ETH well, whitelisting that into the Silo and dewhitelisting the old one, maybe decreasing the amount of Seeds for it, prompting people to shift their liquidity from the old Bean:ETH well to the new one. Thanks. Awesome. So looking here, it seems like somewhere between 10 to 20% is the target. 20% does feel quite high, especially given that once someone has 33% of stake, they're able to pretty much control 33% of attestations already, so there are implications at the Ethereum protocol security level. It feels like when we're getting into this territory, the Ethereum network itself is already in jeopardy. And it should also be noted that if any of these entities were to perform some sort of multi-block MEV attack, it would be public. Everyone knows all of Coinbase's validators; I think there's some way to see a list of their actual addresses.
So for any of these entities to perform some sort of attack, it would be known — it would be known that Coinbase performed an MEV attack on the Bean:USDC pool, or something. So that's an important consideration. From this chart it seems like somewhere in the 7 to 10 block range is a maximum expectation, maybe eight or nine. Curious for thoughts on what people are thinking in this range. Apologies, can you just remind us what the difference between these two tables is? It's the same simulation run twice; this one seems a little broken, so I'm going to get rid of it. This one is the expected one. Got it. Yeah, I should have added 0.15 to this, as 0.2 feels a bit high, so let's quickly rerun that — does anyone have any thoughts in the meantime? Awesome. So maybe while this is running we can start to look at some actual data. Again, the goal here: given this target of maybe 7 to 9 blocks as the multi-block MEV assumption, there are a few parameters that can be tweaked in order to minimize the attack, and the two biggest ones are this cap on the maximum percent change per block and the lookback on the oracle. I'm going to put a cap of 300 blocks on the lookback, which is 3,600 seconds — an hour — since the season time can't really be changed; the lookback on that is quite fixed, so having a lookback greater than an hour would require a change at the Beanstalk protocol level. This being the number of blocks. So getting into this, here we can see — just highlighting — that the arithmetic EMA does not perform well without any sort of cap.
Here we're looking at the proportional balance change. This is a balance manipulation of 10^x: for instance, this is about zero if someone were to multiply the balance by 10^-6, and this only goes up to 10^3 — which is, say there's a million in a pool, someone increases the balance by 1,000x. Do you mean a times increase? Yes, times, not percent, apologies, and thank you for that correction. So it would be, again, a million on each side, and someone adds a billion of liquidity to one side. We can obviously explore larger increases, but once we start looking at examples with the cap in place, you'll see that it doesn't really matter beyond a certain point. And already you can see with the arithmetic EMA it's way too high: we're looking at somewhere between 700 and 800% oracle manipulation without the cap. So ultimately it was this result which led to the use of the geometric EMA. Then, importantly, on the downside the geometric EMA performed substantially worse than the arithmetic — but here we're looking at a decrease of 15%, compared to the above case where the arithmetic EMA goes up 600%. So the geometric EMA on the downside is far less volatile than the arithmetic on the upside. Awesome. I may have to just rerun all of this to get things synced up. Great. So here we're seeing oracle manipulation as a function of the number of MEV blocks.
Blue is a two-block MEV, orange is three blocks, green is five, red is seven, purple is nine. Again, this is the EMA; "300B" stands for a 300-block lookback. So the lookback is fixed at 300 blocks — a full hour — and we're varying the number of blocks that the move occurs over, and looking at how that affects the manipulation. For instance, you can see with the two-block manipulation it's something like 5%, but with the nine-block manipulation we're looking at well over 100% (this is proportional change, so two is 100%). On the downside, it's well below 80% on the nine-block multi-block MEV. The point that's attempting to be showcased here is that without some sort of cap on the maximum percent change per block, it does not seem possible to have a manipulation-resistant oracle. And this, for example, is the Uniswap V3 oracle. Again, this one goes up to 1e6 on the highest end, which means someone is adding $1,000,000,000,000 to a $1,000,000 pool, which is a lot. But one thing that's really important to note as a difference between the Wells and Uniswap V3 is that the Wells have an option to have no fee, which means any sort of multi-block MEV manipulation of the balances is going to be free. So say, for instance, someone like the US government wants to come and somehow try to find an attack vector on Beanstalk.
They could theoretically mint infinite USDC through Circle, and because there's a 0% fee in the well, at the end of the multi-block MEV they still get back all of the USDC they minted, and therefore it would essentially be free. Perhaps it makes sense to add some sort of micro-fee such that there is a cost to that kind of attack — that's a discussion topic I'm totally open to having. But there are other ways to mitigate oracle manipulation, which we'll get into shortly. I guess before we continue, we can go back here and see what happened. It seems the 0.15 column — I don't know what happened, it's not being rendered, I think. Yeah, because 0.02 is the maximum anyway. Maybe this table was broken, but you can see that at 15%, it again seems like eight or nine blocks is the limit. So I think we want to be looking at a maximum multi-block MEV of eight or nine blocks. For reference, in Uniswap's article about how multi-block MEV affects their oracles, they use seven as the max, but eight or nine might be safer. So now let's get into — okay, great. This one is just showcasing the different frequencies: with nine blocks, the likelihood of occurrence is much less than that of, say, two blocks. And you can see here that for someone with 15% of stake — maybe that's the upper bound for where we might expect Coinbase to go over the next few months — nine blocks would happen every 25 years, eight blocks would happen every other year, and seven blocks is a fairly normal occurrence; as stated, it's already happened. Great, so let's get back into this. What this graph is showcasing is the following.
So this is a two-block multi-block MEV, and it's showcasing the difference between the lookbacks: this one is a one-hour lookback, orange is a 30-minute lookback, and blue is a 15-minute lookback. You can see how, in the case of a two-block multi-block MEV, the oracle manipulation on 15 minutes is substantially more — maybe up to something like 4 to 8 times more — than on an hour. So as we go through the last step, which is introducing the percent cap into this problem, we'll need to experiment with different oracle lookbacks; you can see that the lookback is really important in terms of the maximum manipulation. So let's get into it a little. Now we're looking at a seven-block multi-block MEV. Okay, maybe we'll just get rid of this outlier here — oh, what is that? Okay, it didn't work; we're just trying to move this gray line because it's really out there, but whatever, we can leave it in. Here we can see — now we are adding some sort of cap. Let's maybe just set this to nine, as that was discussed as the highest limit. So here we can see the effect of adding some sort of cap, and maybe to showcase the difference, we can add "no cap" here. So first, before we remove this cap, it's important to see what the cap does. Uncapped, this is a nine-block MEV; you can see that the arithmetic mean really is not resistant to any sort of manipulation. Maybe let's just look at the geometric case so that things aren't ridiculous here.
So here you can see the effect of some sort of cap on this single-block manipulation — and maybe add two here to see them all; maybe not, that's too many lines. This is the effect of putting some cap on the maximum percent change per block. Maybe we'll just remove this lower bound — no, we need the lower bound, it seems. Great. Here is the break-even point, but you can see where the uncapped line extends beyond the other lines, both up and down. The geometric EMA here has a cap of 100% per block, which means that the balances could change by up to 100% per block — this would be like someone dumping pretty much everything into the liquidity pool, when you look at a normal pricing function. I guess the question is — now, to start discussing caps: what is the cap? The cap is the maximum percent change that the oracle is willing to accept as a legitimate value per block. What this means is: immediately after someone performs some sort of swap, the next block someone performs some drastic swap back. In this case, a 100% balance change would mean that in the span of one block after someone last performed a trade, someone either doubles the size of the liquidity in the pool or performs a swap that drains nearly everything from it. I think that 100% is a pretty good limit: the slippage incurred on a swap or add-liquidity of 100% is so drastic that it wouldn't really make sense for someone to perform this action legitimately. In the instance where there's 1,000,000 USDC and a million Beans in a pool and someone puts a million in, they might recognize something like 50% slippage on that trade — with one-sided liquidity, maybe it's like 33% slippage in a constant product pool.
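To make the cap concrete, here is a hypothetical Python sketch of the clamping logic. The symmetric multiplicative lower bound is my assumption for illustration; the actual pump may cap the downside differently:

```python
# Hypothetical sketch of the gamma cap: the pump refuses to move its stored
# balance by more than `gamma` (e.g. 1.0 = 100%) per block, clamping any
# larger reported change before it enters the EMA.

def capped_balance(prev: float, reported: float, gamma: float) -> float:
    upper = prev * (1 + gamma)
    lower = prev / (1 + gamma)  # symmetric in multiplicative terms (assumed)
    return min(max(reported, lower), upper)

# A 1000x manipulation is clamped to at most a 2x move with a 100% cap:
print(capped_balance(1_000_000, 1_000_000_000, 1.0))  # -> 2000000.0
# Draining the pool is clamped to at most a halving:
print(capped_balance(1_000_000, 0.0, 1.0))            # -> 500000.0
```

The key property is that the damage from a single manipulated block is bounded regardless of how much capital the attacker deploys, which is why the cap matters more than the raw size of the attack.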
But the point is, for someone to move the balances by more than 100% in a legitimate fashion is quite rare. Someone could add two-sided liquidity — adding both Beans and USDC to the pool in equal proportion — but that wouldn't actually change the price at all. So if someone were to add more than 100% one-sided liquidity, there would be a situation in which the oracle balance is now lagging behind the actual balance a little. But it should be noted that it's already doing that, given that it is already using some sort of historical mean, meaning it's always going to be lagging behind the instantaneous price. So the downside of adding a cap is quite low. Loosely related — after we had already started working on these oracles, at one point during the crypto crash, SBF tweeted that FTX's liquidation engine has a maximum percent change of 20% in 15 minutes. So a maximum percent change of 100% in 12 seconds is already significantly more permissive than what has been used by centralized alternatives — just a data point there. You can see here in this graph that with a nine-block multi-block MEV, we're looking at something like 45% oracle manipulation without a cap; with a 50% cap, something like 10% oracle manipulation; and with a 100% cap, something like 20% oracle manipulation. Now let's quickly showcase what this looks like with the arithmetic: you can see that with the arithmetic, even with a 100% cap, we're already looking at an increase of up to 350%. So for this reason, I still feel comfortable going with the geometric over the arithmetic.
Even with a cap in place — even with a 50% cap — the arithmetic is still looking at something like a 50% oracle manipulation, and the geometric mean is still performing better on the upside. On the downside, both arithmetic means perform better; but looking at the 50% cap EMA, it has a 9% decrease, and the 100% cap has about a 15% decrease. So — and let's put the max back in — already we're dealing with ranges where it's a 50% increase to the upside for the arithmetic, while with the geometric we're looking at very similar manipulations on both sides: the 50% cap gets us about 15% on each. So although the arithmetic is extremely limited in its downside manipulation potential, I think the geometric has a nice distribution, where the maximum percent manipulation on the upside is equal to the maximum percent manipulation on the downside. So I'm curious for any thoughts here — does anyone have any strong opinions about where the cap should be? This is nine blocks with an hour-long lookback; just curious for thoughts so far, if anyone has any. Oh, a couple of questions. The first question is: with these graphs, do you know the alpha parameter used — or I guess, how does the alpha parameter affect, for example, the maximum manipulation for a given block count? Yeah. So it uses an alpha parameter where alpha = smoothing / (1 + N). I haven't actually looked at the formula in a while, but this is the general formula used for EMAs, where smoothing is generally set to 2 and N is the lookback.
So in this case, in this graph, we're using an N of 300, which means it's about a 300-block lookback — the alpha parameter is being tuned to an hour-long lookback. Why this formula is used with a smoothing of 2 is because, with a smoothing of 2, the center of mass of the EMA is the same as the center of mass of the SMA. So when comparing the EMA to the SMA, this formula is used, as they tend to behave similarly with respect to time lookback. The expectation here is that this is a 300-block lookback, but we could include a 30-minute lookback here. You can see that the green and the blue lines are a 30-minute lookback with a 50% and 100% cap, respectively, and it drastically increases the percent manipulation. We could also add in a lower cap of 25% per block, which isn't a ridiculous thing to do, and that behaves very similarly to the 300-block lookback with a 50% cap. Does that answer your question? And what's your next question? Yes — I guess ideally, you said that on the upside we want a geometric mean price, and on the downside we want the arithmetic. But the issue is that if we were to store both the geometric and the arithmetic, and use that for deltaB — while we would limit the manipulation, it would cost significantly more, like 25,000 gas, to store both, right? Yeah, it's a good point, Brean. There even could be some logic — especially with the EMA, where each update of the EMA is independent —
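For reference, the alpha convention described here can be written out directly. A small sketch, using the formula as stated on the call (smoothing = 2 so the EMA's center of mass matches an N-block SMA's):

```python
# alpha = smoothing / (1 + N), the convention described on the call.
# With smoothing = 2, the EMA's center of mass matches an N-block SMA's,
# so the two behave comparably for a given time lookback.

def ema_alpha(n_blocks: int, smoothing: float = 2.0) -> float:
    return smoothing / (1 + n_blocks)

print(ema_alpha(300))  # one-hour lookback at 12-second blocks
print(ema_alpha(150))  # 30-minute lookback
```

A shorter lookback means a larger alpha, i.e. more weight on the newest (possibly manipulated) value, which is why the 30-minute lines show larger manipulation than the one-hour lines.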
— there could be a rule that says: if the value is decreasing, use the arithmetic mean; if the value is increasing, use the geometric mean. However, looking here — for instance at this green line — the maximum percent manipulation is just less than 40 on the upside and just less than 30 on the downside. So from some perspective it doesn't really seem like it would make too big of a difference, as the maximum percent on the upside is still greater than the maximum percent on the downside. However, if the downside is of particular concern in this graph, there could be the potential to explore an option that uses some sort of combination of both. It's just that at that point we're getting into unexplored territory, where it feels like we're making things up: I'm personally not aware of any instance where someone combines the two of them, and thus it's unclear whether doing that is legitimate from an oracle perspective at all. So personally I think it makes sense to stick to one or the other. Both could be included, sure, and maybe some sort of average of the two is taken. But even then, looking back at the maximum percent manipulation up here — even if the average of these two is taken, we're still looking at 350% manipulation, to the point where the arithmetic mean is so bad on the upside that, unless we completely ignored it on the upside, it doesn't seem warranted to include it at all. And maybe use it on the downside — it's hard to know.
But yeah — you mean the min of the two? Yeah, that would just be using the arithmetic mean when decreasing and the geometric mean when increasing, which is totally an option. Happy to explore that; unfortunately I don't have that scenario prepared in front of me, but happy to explore it if that's something people think is interesting. Though personally I'd like to see some historical situation in which someone has tried to use that sort of distribution before. And yeah, the geometric mean's performance is not great as it tends towards zero, which isn't great; there would probably need to be some minimum balance of one as an oracle parameter to prevent it from going too low. However, it should be noted that it's impossible for a pool to reach zero unless someone removes all liquidity. But definitely something that can be considered if people think it's helpful. So, that's pretty much everything we wanted to showcase today. Personally, upon reflecting on these results myself, I feel like either the geometric EMA with a 0.5 cap and 300-block lookback or with a 1.0 cap and 300-block lookback are the most viable, and also potentially the 25% cap with a 30-minute lookback. Looking at a maximum oracle manipulation between 10 and 20% seems pretty reasonable, given that a nine-block MEV could occur, say, once every 50 years or 25 years from Coinbase. Yeah.
I'm curious for people's thoughts here, and what data people would want to see in order to get closer. Just as a sort of conclusion here: is the idea that these caps are implemented for every pump, or specifically this pump? That decision can be made on a pump-by-pump basis, and that's the point of abstracting the pump out. But personally, I think any oracle existing in a world with multi-block MEV should enforce some sort of cap, as otherwise the percent manipulation could theoretically be infinite in the case of some sort of nation-state attack. Yeah, agreed. I just think that perhaps for a Bean:ETH well the cap can be a little higher compared to a less volatile, more stable pool's pump, for instance. Well, I think 25 or 50% is a good number in my opinion; adding 100 is a little bit extraneous. Great, so let's zero in on that range. From these four values — a 25% cap, a 50% cap, a 30-minute lookback, and a one-hour lookback — are you inclined towards picking any of these specifically, and if so, why? No inclination here; I would want to take this Python notebook and do some stuff on my end. Great, I'll go ahead and share this in the chat. In the meantime, what do people think makes the most sense in terms of trying to reach some sort of collective conclusion on what should be done here? Are these notebooks shared anywhere, for people to play around with? I just put this notebook in the Barnyard chat; here's the other notebook, which is used to generate the percentages for multi-block MEV.
Maybe this notebook too — between the two, hopefully you should be able to mess around with all of these values yourself. Given that it seems like people want to play around with these things themselves: for the percentages, all that needs to be done is to create a cell that looks like any of these other ones. "percentages" is used to set the x-axis, these four are used to define each individual line, and there's a for-loop over all of them where, for each one, it creates a different line. You can refer to examples up here on how to switch to an arithmetic mean, and you can add an SMA here — I'm curious if I have an example of that already; I thought I had prepared something that compared the SMA, but I guess not. Anything else that people would like to see today? Do people think it makes sense to come to any sort of conclusion today? It would just be good to drill in on a specific range — or at least, I'm curious whether this is substantial enough that people should play around with it for a day or two to come to some sort of conclusion, or how do people suggest we ultimately make that decision? One last thing before we conclude: we've zeroed in on the 7 to 9 block range because we think the probability of someone manipulating the well once every 50 years is acceptable. Is everybody generally okay with that time frame, or do we want longer or shorter? Makes sense to me. I was hoping one of you guys could summarize what the other decisions we'd have to make would be.
So the main decision is what to set as the number-of-blocks lookback and what to set as the cap. For instance, take these four lines: the blue and the green are a 30-minute lookback on the oracle with a 25% and 50% cap, respectively, and the orange and the red are a 300-block (one hour) lookback with a 50% and 25% cap, respectively. It seems like we've narrowed in on this range as the desired range, but ultimately the decision we want to make is: what is the lookback — that's what sets the alpha parameter on the oracle — and what is the cap — that's what sets the gamma parameter on the oracle. Those are the two parameters that the pump uses. Actually, I have a question: in the context of Beanstalk, what would the downside look like here? We've focused a lot on the upside, which makes a lot of sense, and from what you've been saying I think going with the geometric mean makes more sense than the arithmetic. But I'm thinking in terms of an attack vector: what would a downside attack be, when we're looking at negative balance changes? Yeah, that's a great question. Negative balance changes are likely going to be very similar. Say you have a Bean:USDC well: a negative balance change could be the number of Beans in the well going close to zero, which is like someone swapping a bunch of USDC into the well and removing a bunch of Beans from it. Now the number of USDC in the pool is increasing drastically and the number of Beans in the pool is decreasing drastically. In this case, what this would mean is that deltaB would be much higher.
And the BDV of the LP token would be much lower, and the Bean price would increase drastically. There could be the opposite situation, in which someone dumps a lot of Beans into the pool; this would cause the number of USDC in the pool to quickly approach zero and the number of Beans in the pool to approach infinity, which would mean that a lot of Soil would be minted, or the LP token in the well would be incredibly overvalued. So the effects of manipulation on the upside and manipulation on the downside are not mutually exclusive. I guess dumping USDC into the pool to buy Beans is a lot easier of an attack vector than dumping a bunch of Beans into the pool, just because dumping a bunch of Beans would require someone to accumulate a lot of Beans first, which would probably mean they either have to secretly pile up in the background, or buy from another pool in order to then dump the Beans back — say, they could buy Beans from the ETH:Bean pool and then dump them into the Bean:USDC pool. But that would have a net neutral effect on deltaB, probably; honestly, it would depend on what the pricing functions are, et cetera. I would say the biggest thing on the downside would be someone actually performing a swap in the Bean pool by putting a bunch of USDC in and taking the Beans out, so that the number of Beans in the pool is quickly approaching zero. Right. Yeah, for now I can understand why Uniswap probably went with the upside over the downside, but in a future context where maybe demand for Bean in other protocols is much higher than it is now, that downside might really hurt us — I'm not sure. Yeah, it's something to consider.
I would say that here there's a max; for instance, looking at this red line, the oracle is capped at plus or minus 5%. Say there's a BEAN:USDC pool and it's currently balanced: if the USDC value goes up by 5% or the Bean value goes down by 5%, it's probably going to have about the same effect on minting. So what seems to matter from that perspective is the maximum percentage manipulation overall, and it doesn't much matter whether it's upside or downside. That said, because upside manipulation is increasing the values, its effect is probably going to be slightly smaller. But given that we're dealing in a range of a maximum of 5 to 15%, the difference between 0.85 and 1/0.85, or between 1.15 and 1/1.15, isn't large. 1/0.85 is about 1.17, so a 15% manipulation to the downside is the same as a 17% manipulation to the upside, which is roughly comparable. If we're already thinking in the range of a maximum of 10%, upside versus downside doesn't matter that much. Now, if we were thinking in the realm of a maximum downside of 50%, a 50% downswing is a 100% upswing, since 1/(1 - 0.5) is 2. So as the maximum percentage of oracle manipulation increases, it definitely becomes more important to focus on the downside; but if we're restricting ourselves to narrow ranges, there's less of an effect. I really appreciate you bringing this up, because the impact of upside versus downside manipulation certainly needs to be taken into account in making these decisions.
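The upside/downside equivalence described above is just ratio arithmetic: a fractional drop of d in one reserve moves the pool price by the same factor as a rise of 1/(1 - d) - 1 in the other. A quick sanity check of the numbers from the call:

```python
def equivalent_upside(downside):
    """Upside move with the same price impact as a fractional downside
    move d: 1 / (1 - d) - 1."""
    return 1.0 / (1.0 - downside) - 1.0

# 15% down is equivalent to ~17.6% up; 50% down is equivalent to 100% up.
```

The asymmetry is negligible for small caps and grows quickly past 30 to 40%, which is the argument for treating the two sides the same only when the cap is tight.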
Oh, nice. A question on multi-block MEV, where I'll admit near-total ignorance: why focus on the long tail here and not the short side? Like, why use a multi-block MEV run of nine or seven blocks as opposed to the lower end? Can you explain that a little more?

Yeah. Let me quickly add two lines to this graph; let's make this one 0.5 just to showcase the difference. Here you can see that with the 50% cap at an hour-long lookback, the oracle can pretty much be fully manipulated. What at least might make sense is to focus on the worst possible situation, right? An exploiter, playing their cards right, is going to target the maximum possible impact they could have. But it should be taken in context: looking back at this chart, Coinbase could be performing something like 167,000 multi-block MEV attacks per year, which is hundreds per day. So it's definitely important to be looking at this situation. But if Coinbase plays their cards and performs a multi-block MEV attack once, even if it's only a two-block one, that gives the Beanstalk DAO time to react and make some sort of change in response. So from Coinbase's perspective, it probably makes the most sense to wait until they have as many consecutive blocks as possible before playing their cards, because once you play your cards, everyone else knows what you're trying to do, and some sort of reaction can be mounted.
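As a rough model of where a figure like "167,000 per year" can come from: under proof of stake, a validator with stake share p is assigned each 12-second slot roughly independently (an approximation; RANDAO-based assignment adds correlations), so the expected number of length-n consecutive-slot runs per year is on the order of slots_per_year times p to the n. The stake share below is a placeholder, not Coinbase's actual number:

```python
def expected_runs(stake_share, run_length, slots_per_year=2_628_000):
    """Approximate expected yearly count of `run_length` consecutive slots
    for one operator, treating slot assignment as independent coin flips.
    slots_per_year = 365 * 24 * 3600 / 12 for 12-second Ethereum slots."""
    return slots_per_year * stake_share ** run_length
```

With a hypothetical 25% stake share, two-block runs land around 164,000 times a year (hundreds per day), while seven-block runs drop to roughly 160 per year, which is why the long-tail windows are so much rarer but so much more dangerous per occurrence.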
I'm not sure if that made sense. Some other things that have been considered: adding some sort of boolean to the oracle that specifies whether the current balance is outside the tolerable range, or disabling the ability to use the oracle altogether if it's outside the range, so that if anyone moves the balance outside the range, the oracle just shuts off. That's also an option, but then there's potential downtime, and people probably want to start reacting to any sort of manipulation immediately. So, a lot of things to consider. And the point brought up about 167,000 manipulations a year is important: if the DAO doesn't react, that's quite significant even at about a 5% change each time. So I appreciate you bringing that up.

So in terms of next steps, do you think it makes sense to construct some sort of combination of the upside and downside, the EMA and the SMA? Do you want to move forward with some sort of written discussion of the options, or should we continue this on the next dev call? What's the feeling? You're probably in the best position to guide us.

Yeah. From my perspective, I think there's enough here, and if there is serious concern, we could just go with something like a 0.5% cap and an hour-long lookback, a maximum of plus or minus 0.5%; I guess we're looking at the line I just added. From my perspective, if there is serious concern, we can continue to restrict the maximum within the existing framework.
Again, using some combination of the oracles is honestly foreign to me, and it might end up in that situation; we would just have to do a lot more experimentation and general research to see whether that's a valid solution. From my perspective, looking at something like either this orange line or this red line, depending on whether the tolerance is 5% or 10%, they do seem sufficiently manipulation resistant. The goal is that the nine-block case would never happen; but if we look at seven blocks, we're looking at roughly a 2 to 3% manipulation in the 25% case and a 5% manipulation in the 50% case.

So from my perspective, why are we considering the 30-minute lookback at all?

I mainly wanted to showcase the difference and that it's an option. As far as minting is concerned, it's the hour-long lookback. What the 30-minute lookback would be for is the instantaneous oracle, so only for depositing: the BDV oracle could use the 30-minute lookback if desired. For minting, unless the season length were shortened to 30 minutes, it would probably make sense to use a 300-block lookback.

So what you're suggesting is that deposits, which currently use a more instantaneous but potentially more manipulable oracle, could use something shorter than an hour?

Potentially. And it's a good point that they need to use the same cap: it'd be like, if we chose the orange for minting, we would have the option of setting the blue for BDV.
But I guess the one point is, on one axis, some sort of BDV manipulation might be considered less impactful: if someone gains a lot of Stalk through it, the Beanstalk DAO could hold some kind of vote to potentially remove it, and even 5 or 6% isn't going to make or break Beanstalk. So to some degree it could make sense to leave them at the same level. But to your point, the bigger decision is really between the 0.5 cap and the 0.25 cap.

And that's because of the way the pump works: you can look back over multiple lengths without requiring any additional gas cost, but you cannot look back over multiple caps without increasing gas costs. Is that correct?

For the time-weighted average, you can look back over longer lengths. For the EMA, the length is going to be coded into the Well itself, and the cap has to be coded into the Well itself; both the length and the cap. For the SMA, the length does not need to be encoded, since it's a time-weighted average: you snapshot at the start, snapshot at the finish, and take the average over the snapshots.

But given that we're considering EMAs...

Yes, but minting will always use the SMA. So if we set the EMA to 30 minutes, the season would still use an hour, but BDV would use 30 minutes. Unfortunately, I'm going to have to go mobile here in a second, so I'll have to end my screen share. But hopefully everyone has the ability to run this locally and spin up some graphs.
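The snapshot mechanism described for the SMA is the standard cumulative-accumulator trick (Uniswap-v2 style): the chain stores a running sum of value times elapsed seconds, and any caller can recover the average over a window from just two snapshots, which is why the window length never has to be encoded in the Well itself. A sketch, with names chosen for illustration:

```python
class Accumulator:
    """Running sum of value * elapsed-seconds (illustrative model)."""

    def __init__(self, t0):
        self.cum = 0.0
        self.last_t = t0
        self.last_v = 0.0

    def update(self, t, value):
        # Fold in the previous value over the time it was in effect.
        self.cum += self.last_v * (t - self.last_t)
        self.last_t, self.last_v = t, value

    def snapshot(self, t):
        # Cumulative value as of time t, without mutating state.
        return self.cum + self.last_v * (t - self.last_t)

def twap(snap_start, t_start, snap_end, t_end):
    """Simple moving average between two snapshots."""
    return (snap_end - snap_start) / (t_end - t_start)
```

For example, a value of 10 held for 100 seconds followed by 20 for another 100 seconds averages to 15 over the window, regardless of how long the window is, using only the two endpoint snapshots.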
Maybe we'll screenshot this one in particular and put it in the Barnyard chat so people can look at it there if they want.

Okay, I'm back.

All right, great. So can you maybe explain why minting will always run on an SMA?

Yeah. The goal is to mint based on the average over the entire season, and the EMA weights more recent values higher. For something like minting, the goal seems to be to mint based on the average over the entire season equally, meaning that if the Bean price was slightly higher in the first 15 minutes, it's important to weight that equally to the last 15 minutes. The EMA would drastically overweight the last 15 minutes. As far as minting goes, it feels like every second should be weighted equally, and that's what you get with the SMA: every second of time is weighted equally as far as Beanstalk minting is concerned. The EMA, again, would weight the last 15 to 30 minutes of each season drastically higher. And maybe that is the goal: that is the actual price at the time, and maybe Beanstalk should weight the last 15 to 30 minutes of a season more; say it takes people 15 to 30 minutes to dump Beans, and therefore the price at the end is more important. But at least as Beanstalk has historically been maintained, every single second is weighted equally. I guess I'd open up the floor.
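The weighting difference can be made concrete. Over a 300-block (one-hour) season, an SMA gives every block weight 1/300, while an EMA with smoothing constant A gives the block k steps back weight (1 - A) * A^k, so recent blocks dominate. The constant 0.99 below is a placeholder for illustration, not the pump's actual parameter:

```python
def sma_weights(n):
    """SMA: every block in the window is weighted equally."""
    return [1.0 / n] * n

def ema_weights(n, A):
    """EMA weight on each of the last n blocks (most recent first).
    The geometric tail beyond the window is simply omitted here."""
    return [(1.0 - A) * A ** k for k in range(n)]
```

With A = 0.99, the last quarter of an hour-long season (75 of 300 blocks) carries about 53% of the EMA's total weight, versus exactly 25% under the SMA; that is the "drastic overweighting of the last 15 minutes" described above.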
Now, if anyone has thoughts on changing that and moving to an EMA for minting: I think it's definitely worth considering, particularly if you look at the history of Beanstalk. Soil issuance used to be a cumulative average over multiple seasons, and over time it was realized that that was inefficient and that it was better to issue Soil based on just a single season, with a cap based on the supply of Beans. And there's an argument to be made that it's even more efficient to query some period of time closer to the end of the season, such that Beanstalk is responding to the latest data, assuming that data is manipulation resistant, which is to some extent the entire exercise we're going through here. So suppose there's a manipulation-resistant query over the past hour, but weighted towards, let's call it, the last 30 minutes of the season: after the morning auction you have some sort of price discovery across the various markets, and it may take some time for the system to find its equilibrium for that season. I don't think it's a problem per se to have the system weight the back half of the season more heavily. On the upside, it's a little more interesting how many Beans to mint, because in theory you could have intra-season volatility towards the end of the season to facilitate minting, which is worth considering.

Yeah, great points; I definitely agree it's something that should be considered. I guess it would be important to do some sort of analysis of how much the weighting actually changes.
As a data point: the EMA doesn't actually have a fixed lookback. The way it works is that it multiplies the previous value by some constant and adds the current value times one minus the constant. So it weights the last value by some number close to one and the present value by some very small number. For an EMA with an hour-long lookback, only about five-sixths of it is actually determined by the last hour of data. That doesn't necessarily mean it shouldn't be used; it's just another data point to consider. So if we go with the hour-long lookback over the season, only five-sixths of that data is new data from the current season. Obviously the time frame could be shrunk to, say, 45 minutes if the goal is to encompass more of the last hour. Just more food for thought.

One more point, not that it matters that much: if the decision is made to use the EMA for minting, it would, first off, decrease the cost of the Wells overall, since there would no longer be a need to use an SMA for anything; the oracle would only need to track the EMA. But it would make it so the decision not to use the SMA couldn't be undone without deploying new pools. And it would decrease the gas cost of the sunrise, since it would no longer have to snapshot the start timestamp. Just a couple of things to keep in mind.

All right. I think at this point there are so many things to consider and parameters to optimize that there needs to be something more formally written, so that people can consider it. Because, to be frank, it's going to be hard for people to spend two hours listening to this.
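The "five-sixths" figure corresponds to the geometric tail of the EMA: with per-block constant A, the last n blocks carry 1 - A^n of the total weight, and the remainder leaks in from before the window. Inverting that relation shows which smoothing constant produces a given coverage. The 300-block window is the one-hour case from the call; the rest is generic algebra, not the pump's actual parameterization:

```python
def weight_in_last(A, n):
    """Fraction of total EMA weight carried by the most recent n blocks."""
    return 1.0 - A ** n

def constant_for_coverage(target, n):
    """Smoothing constant A such that the last n blocks carry `target`
    of the total EMA weight: solve 1 - A**n = target for A."""
    return (1.0 - target) ** (1.0 / n)
```

For five-sixths coverage over 300 blocks this gives A of roughly 0.994, i.e. each block retains about 99.4% of the running average; tightening the window to 45 minutes is equivalent to picking a slightly smaller A.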
A lot of people will come out still not really understanding what's going on. So it's important to define a lot of this stuff in written form. That doesn't necessarily mean we need to circle back on this in two days, unless you want to, but I think it would be very helpful if there were something written up for us to collectively walk through the next time we talk about this, so we can consider the trade-offs explicitly.

Sure, definitely happy to write something up. It's just that, at least on this end, writing something up is obviously going to push timelines on any sort of deployment back by probably a couple of weeks. So as long as everyone is on the same page that that's acceptable, happy to move forward with some sort of formal write-up here.

Yeah. I'm not sure that we should be making that decision, but...

I think we lost you for the last few seconds; you're back now.

Sorry. It would probably make sense to have a little straw poll in some capacity with Stalkholders on how important it is to formally define all this stuff and how that affects timelines. But on this end, I'd probably support doing it rigorously, or at least semi-rigorously, even if it took an extra week or two. That seems like a no-brainer.

So there should probably be some sort of vote on that: a vote on whether the write-up should happen, or on what the parameters should be?

Well, there will implicitly be a vote on the parameters when the pool is deployed. There should probably also be some sort of vote on the parameters beforehand. But at this point we're talking more about what the right level of discourse is to be had on the pump front. Right.
So the question is: is this type of dev call, plus a short summary write-up from someone who was listening to the call, sufficient? Or should we collectively be working on more of a formal paper to define all this stuff, as Truworths had suggested at the beginning of the call? How do we want to think about this? It's pretty important, both from a security perspective and from a public-good perspective, to make it very clear what's going on. Maybe that work can happen immediately after the Wells are deployed, but then to some extent, maybe the research would conclude that the initial Well was not properly deployed and we'd have to migrate it, which, maybe, is acceptable. So yeah, I'm suggesting some sort of proposal, maybe not a formal vote but a straw poll, on what people think in terms of immediate timelines versus spending an extra couple of weeks as a community really getting into the nitty-gritty on the oracle stuff. There's also an argument to be made that this is such an important thing for Beanstalk to get right that timelines should just be pushed back regardless, and it would be interesting to gauge community sentiment on all of this. So I think a straw poll makes sense more than anything, but I don't know, maybe that's ridiculous.

I think, to your point, it's hard to discuss the thing at all without having some sort of write-up. And I'm not really sure what it would mean to have a straw poll on whether we should spend the time on the write-up itself, because what information would people have to inform their vote? This call recording, I guess. Yeah.
And then maybe some sort of short summary someone can write up based on this call. Yes. Something to think about, I guess. If you're asking me, it does feel like overkill, particularly when we're talking about something on the order of one to two weeks. But I'm not sure that's the order of magnitude being proposed; maybe you can help us better understand to what extent we'd want a formal write-up.

Yeah, honestly, I'm not sure what a formal write-up would even be. For instance, there were no numbers here; there's more analysis that could be done in the first place. There are no numbers here covering the situation Sophocles brought up, some combination of upside and downside manipulation in tandem. There's nothing here saying: if the BEAN:USDC pool has this much liquidity, what is the maximum number of Beans that could be minted through manipulation over X blocks. I guess it's implicit in some of this data, but to me a formal analysis would have to take everything here a step further and actually specify some numbers for these things. You'd pretty much have to write up everything that was said on this call: introducing the context, walking through the history of on-chain oracles, some very in-depth comparison of EMA versus SMA and arithmetic versus geometric, all of this stuff. I don't know; I'd be curious to hear what people think is important to include in some sort of write-up.
It's just hard to imagine any sort of formal, proofread, well-articulated article being written in anything less than a week or so. But perhaps I'm completely wrong on that front.

I think it's a week. Go ahead.

I'm thinking it through myself. Given that I've just sat here and listened to this, it's much more understandable and digestible for me; trying to see it from the perspective of someone who hasn't even listened to this call and is just looking at it from a baseline, I don't know. I feel like something elaborate might honestly be overkill in and of itself. Just define what you showed here: okay, we're going to use either the SMA or the EMA; we don't have to go in depth into the back history of all the oracle stuff. Define the parameters, and then give examples: in a BEAN:USDC pool, what would happen if we do X with this parameter, what would happen if we do Y with this other parameter, and show it in graphs. And if people don't understand it from there, then that's unfortunate.

Yeah, plus one to Sophocles' point about the level of depth of a write-up of sorts. I'm certainly not thinking of a formal whitepaper-level thing with documentation of the history of on-chain oracles or anything like that. Not that that would hurt, but it's all about the time cost to understand. Very helpful color. But I'm in agreement in the sense that it's helpful to at least spend a couple of days writing something up so the broader community can digest everything that was discussed on this call
without having to listen to and watch this full two-hour talk.

Maybe a different question: how long would you hope this pump can serve as an oracle for Beanstalk?

That's a great question. There are several things to consider. The first is: at what point does some sort of on-chain oracle like this become unnecessary, because oracles have moved to some kind of provable off-chain computation, or to the protocol level, or some oracle solution exists that solves this? At the same time, an Ethereum protocol upgrade that removes the ability for multi-block MEV to occur, through single-slot finality or secret leader election or whatever, would also drastically decrease the potential attack vector on this oracle. Perhaps there would be no need for a cap at all, since any sort of multi-block manipulation of the oracle would come with significant risk, in the sense that anyone could dump on the manipulator. So probably a timeline of two to three years, with the expectation that in the next two to three years (maybe one to three, but probably two to three) either some kind of alternative, more efficient, all-encompassing oracle solution becomes commonplace, in terms of off-chain computation or a protocol-level oracle, or there's an Ethereum protocol upgrade that removes the ability for multi-block MEV to occur at all.

Very helpful. That's the sort of data point, accompanied by a very short paragraph of explanation, that I would find very helpful in some sort of write-up.
So should this write-up just state the question and essentially be a tool for people to then decide what the parameters should be, or should the paper actually propose a set of parameters that seem optimal given the analysis?

Good question. Personally, I feel a recommendation would be appropriate, and I do think it's appropriate to include an opinion, but the honest thing to do is to explain what the analysis is and why this is the analysis.

Awesome, thank you. This is all super helpful color. I think it's clear at this point that the next step is to try to formalize this in some sort of short write-up that basically presents everything that was discussed today in a succinct, digestible format for a broader audience. At that point it can be discussed, and some decision on these parameters can be made. In terms of how impactful the decision is: the gamma and alpha parameters, which are the cap and the number of blocks of lookback, are parameters of the oracle itself and can simply be changed. But if the decision is made on the sunrise side to mint based on the EMA instead of the SMA, that will actually require a modification of the actual on-chain source code and therefore have implications from an audit perspective. Just wanted to note that. I think it's clear it would be incredibly helpful for the DAO and the community at large to be able to really wrap their heads around all of this in a digestible fashion.
On this end, I feel like the next steps are quite clear, and I appreciate everyone hopping on the call, listening, and giving your input. If anyone does have an opinion as of now on which oracle configuration they might be leaning towards, I'd appreciate them dropping it in the Barnyard chat, just to get a preliminary sampling.

Yeah, a thread would be great. Any sort of preliminary survey of where people's heads are at would be incredibly appreciated, for those who have something. All right, so we'll put a bookmark in this one, and in a week or so hopefully have something to talk about on one of the next dev calls. Amazing.

Last thing: do we want to start scheduling the dev calls for three hours so we can get to more than one topic? Because, again, there's just so much to talk about.

Yeah, it's hard to disagree. An hour on both the Tuesday and Thursday calls as they're currently scheduled works here, so we can do that, although there isn't a good way in Discord to make it clear when an event should end. But we'll figure that out. Does anyone dissent? Is this too long? It can be shorter, but is this long enough?

I really thought we were going to run out of stuff by having two of these two-hour calls. I'm not opposed to it; I also enjoy it. But there are probably going to be times when it's not going to be three hours. We could just do another topic; we were going to do the whole ERC standard thing and we didn't even get to that. If we do that for an hour we might not even finish it, but we can try to do a longer topic in a shorter time, or not.

Okay. Yeah, I feel like keeping it topic-based is probably better than bounding it by time.
Maybe you want it unbounded?

No, not unbounded. Well, we're looking at about 24 hours at this rate of discourse. No, no, I'm saying the complete opposite: I feel like keeping it to a topic, as opposed to a time limit, is probably the better thing. I don't know; after two hours my brain is pretty fried from looking at 20 graphs. So if we were to start something else now, I feel like people would be at 25% concentration. I could be wrong.

Would it be crazy to come back in an hour and a half or something, like two separate calls?

I think we've got things to figure out. I do actually have a hard stop, but maybe we can work it out; we'll figure it out when we talk soon, guys. All right. Thanks, everyone.