- Meeting Notes
- What are the different terms associated with Pumps?
- EMA vs SMA
- What do Pumps currently support?
- Current Pump implementation
- This will not be a normal class; Publius will ask Publius a bunch of questions about Pumps
What are the different terms associated with Pumps?
- An oracle is used to determine any data point that exists anywhere in the world, on- or off-chain. A Pump is an oracle for a Well, and a Well is an AMM. The Pump provides insight into the value at which the Well is pricing the assets inside it. Historically, the most natural price oracle is the price of asset A right now; Beanstalk uses this type of oracle when a Farmer deposits in the Silo. However, since anyone can move assets within a pool, an instantaneous oracle can be manipulated with flash loans. Uniswap v2 used an arithmetic mean as an oracle; this type of oracle was used in the Bean:ETH pool for minting. In Uniswap v3, a geometric mean is used. The geometric mean multiplies all of the inputs together and takes the nth root of the product. An important note with the geometric mean: if any input is zero, the mean will be zero. There are two main ways to query these pricing oracles: the instantaneous value of something, meaning the value of a certain asset right now, and the value of something over time. The Sunrise in Beanstalk takes the average over the Season for the deltaB.
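The arithmetic and geometric means mentioned above can be sketched in a few lines of Python (illustrative only; function names are our own, and this is not on-chain code):

```python
import math

def arithmetic_mean(xs):
    # Additive mean: sum of the values divided by the count.
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Multiplicative mean: the nth root of the product of the values.
    return math.prod(xs) ** (1 / len(xs))

prices = [1.0, 1.0, 4.0]
am = arithmetic_mean(prices)  # 2.0
gm = geometric_mean(prices)   # (1 * 1 * 4)^(1/3), about 1.587

# The geometric mean is never larger than the arithmetic mean, so a single
# manipulated outlier pulls it up less.
assert gm < am

# And if any input is zero, the geometric mean collapses to zero.
assert geometric_mean([1.0, 0.0, 4.0]) == 0.0
```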
EMA vs SMA
- There is a need to get data back in real time; therefore there is a need for moving averages. There are two main types of moving averages, the EMA and the SMA, and both can be used for instantaneous and cumulative oracles. You can take the last n values and average them to measure central tendency and prevent short-term manipulation. The EMA and SMA both aggregate the last n values; the difference comes in how much they weight each value. The SMA weights all values the same, at 1/n. The EMA is different: given some alpha (between 0 and 1), the EMA at time t is equal to the EMA at time t-1 times (1 - alpha), plus the new value times alpha. This could be written differently, with alpha on either term.
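The difference in weighting between the two averages can be sketched as follows (a toy illustration with made-up prices, not on-chain code):

```python
def sma(values, n):
    # Simple moving average: the last n values, each weighted 1/n.
    window = values[-n:]
    return sum(window) / len(window)

def ema_series(values, alpha):
    # Exponential moving average, folded over the series:
    # EMA_t = (1 - alpha) * EMA_{t-1} + alpha * x_t
    ema = values[0]
    for x in values[1:]:
        ema = (1 - alpha) * ema + alpha * x
    return ema

# The last value is a manipulated outlier.
prices = [1.00, 1.01, 0.99, 1.00, 5.00]

# The SMA weights the outlier 1/5; with alpha = 0.1 the EMA only
# weights it 0.1, so the EMA stays closer to the unmanipulated price.
assert ema_series(prices, 0.1) < sma(prices, 5)
```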
What do Pumps currently support?
- The geometric mean requires a lot of math, or conversion into log terms so it can be handled with the logic of an arithmetic mean. There is a way to implement it, but it is very expensive in gas costs. An SMA could be cumulative or instantaneous: Uniswap v2 uses a cumulative arithmetic SMA, and Uniswap v3 uses an instantaneous geometric SMA. If you are using just a cumulative SMA, it is only required to update some running sum. Storing one occurrence costs 20k gas, which means every time someone performs a swap it would cost an extra 20k gas per balance stored. Through the use of a cumulative value, you only need to read 2 data points to recover the average over the window between two snapshots. With a cumulative oracle you are required to snapshot a start, but with an instantaneous oracle you do not need a start. The SMA is very expensive on-chain when used for an instantaneous oracle; the EMA only requires one on-chain update. The SMA can have a variable lookback; the EMA's alpha controls the lookback.
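The "read only 2 data points" trick for a cumulative oracle can be sketched as follows (a simplified Python model, not the on-chain implementation):

```python
# Instead of storing every balance, store one running sum. An average over
# any window is then the difference of two snapshots divided by the window
# length, so a read only touches 2 data points.
balances = [100, 102, 101, 103, 105, 104]  # balance at each block
cumulative = [0]
for b in balances:
    cumulative.append(cumulative[-1] + b)  # one cheap update per block

def window_average(start, end):
    # Average balance over blocks [start, end).
    return (cumulative[end] - cumulative[start]) / (end - start)

assert window_average(0, 6) == sum(balances) / 6
assert window_average(3, 6) == sum(balances[3:6]) / 3
```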
Current Pump implementation
- The current Pump stores a Uniswap v2-style cumulative SMA, but it uses a geometric mean because of how the geometric mean deals with outliers. There will also be an EMA for instantaneous snapshots. There is no flash loan resistance right away, but the change in value can be capped at a certain percentage. This is expensive on-chain, but it is important that there is resistance.
Okay, I think we can get started. How's it going? It's going well. I think we're going to try something a little bit different today, as you and I have discussed. So do you want to kick it off, or do you want me to kick it off? Why don't you, and I'll sit in. Amazing. So for context, there was a dev call today for about 2 hours where Publius was presenting the work that they've been doing on Pumps, and there was a lot of information presented in a very short period of time. Hopefully over the next week or so we will be able to produce some more formal written content to better articulate the trade-offs and decisions that need to be made around Pumps, but we figured it might be constructive to do an hour of Q&A about Pumps, because we didn't really get to do much of that, and figured it would be smart to start from a very basic perspective: just talking about what the different options are, what the different parameters are, how things have been architected, and to cover things a little bit differently than they were covered during the initial presentation. So maybe to kick us off: you were using all of these terms on the call, and some people may not be as math-oriented. There was EMA, there's GMA, there's SMA. Can you maybe start by talking about the different terms and what they mean in the context of a Pump? Yes, happy to. So, what is an oracle used for? An oracle is used to determine the price of, or really any sort of data point that exists anywhere in the world, on-chain or off-chain. A Pump is an oracle specifically for a Well, a Well being an implementation of an AMM, generally going to be used as a constant function market maker. The idea is that the Pump provides some sort of insight into the value at which the Well seems to be pricing the assets inside of it.
So naturally the question becomes: in what different ways do you want to evaluate the price that a Pump is displaying? Take a standard constant product pricing function, x * y = k. You can compute the price in the pool by dividing x by y. So if there is a pool with Bean and USDC, and there's a million of each token in the pool, you can divide x, the number of Beans in the pool, by y, the number of USDC in the pool: a million divided by a million, to get one. What this means is that this pool, acting as a constant product AMM, is willing to offer a user an exchange of one Bean for one USDC, or one USDC for one Bean. It's important to note that this is merely an instantaneous price; for someone trading into the pool, the actual price is a function of size. For instance, if someone were to put a million USDC into this pool, I believe they'd only be able to take out something like 500,000 Beans, as the price between the assets shifts as the ratio of tokens in the pool becomes unbalanced. So it's important to take into context, historically, how different AMMs have tried to provide some sort of baseline and some ways to query the price within a given AMM.
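The constant product arithmetic described above can be verified with a short sketch (illustrative Python, ignoring swap fees):

```python
def spot_price(x, y):
    # Instantaneous price quoted by a constant product pool: x / y.
    return x / y

def swap_out(x, y, dy):
    # Amount of X received for depositing dy of Y, holding x * y = k constant.
    k = x * y
    return x - k / (y + dy)

beans, usdc = 1_000_000, 1_000_000

# The quoted instantaneous price is 1 Bean per USDC...
assert spot_price(beans, usdc) == 1.0

# ...but a 1,000,000 USDC swap only returns 500,000 Beans, because the
# effective price is a function of trade size.
assert swap_out(beans, usdc, 1_000_000) == 500_000.0
```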
Now, of course, the most natural way is: what is the price of something right now, or what is the value of something right now in terms of another asset? Beanstalk performs this type of evaluation when someone deposits into the Silo. When a user deposits BEAN:3CRV into the Silo, Beanstalk needs to, through some sort of oracle, determine the value of BEAN:3CRV in Bean in order to determine how much Stalk and Seeds to give the user when they deposit. So this is one example of an oracle: an instantaneous oracle. One might naturally think: let's query the current balances of the pool to get that price. If there's 1,000,000 USDC and a million Beans, just query the pool now and say the price is $1. But in the on-chain world, the fact that anyone can move the balances in the pool has significant ramifications. Most notably, something that happens commonly on-chain is oracle manipulation through a flash loan. What this means is that someone performs some sort of swap into a pool, performs some action that evaluates the instantaneous price in the pool, and then sells back into the pool atomically, which can be done on-chain, as transactions in the EVM have to happen synchronously.
There's no way for any sort of asynchronous action to occur such that someone could perform a sell into the pool after the first buy (let's say it's a buy that the manipulator does). So this immediately gets you into the territory of: it's not feasible to use the current balances of the pool as an oracle, because it can be very easily manipulated through some sort of atomic manipulation. This then begs the question of how some sort of statistical variable can be used to estimate a manipulation-resistant value of the balances in the pool, which is used to determine the price. The first attempt at this, as far as we're aware, was in Uniswap V2. Uniswap V2 implemented an arithmetic sum of the prices of the pool over time (the prices specifically, not the balances). Every time someone swapped in the pool, it would compute the instantaneous price from the previous balances in the pool and add that to a cumulative price counter, both in the x/y direction and in the y/x direction. This proved to be a great attempt at an initial implementation of an oracle. The Uniswap V2 oracle was used in the original Beanstalk for both minting and the BDV of LP tokens, and really just in those two cases. Now, a couple of things became clear about the ways in which future manipulation could occur on this kind of oracle, and to explain that, we need to get into the different types of means.
So now consider: the goal is to perform some sort of historic time average of variables in the pool across recent history, let's say over the last 300 seconds or the last hour, and to compute some type of statistic that takes into account the balances of the pool at every single point, such that for someone to manipulate the entire oracle, there would need to be some sort of manipulation of the balances in the pool at every single time in the past. Now, the most conspicuous way to perform some sort of aggregation of historical values is through an arithmetic mean, where simply all of the values are summed and divided by the total number of values. In Uniswap V3, the use of a geometric mean was introduced. Interestingly, back when Pythagoras was first working on his mathematical theories, he did research into three different types of means. The first is the arithmetic mean, which is an additive mean: the sum divided by the number of occurrences. The second is the geometric mean. A geometric mean is a multiplicative mean, in the sense that instead of adding all of the values together, it multiplies them together, and instead of dividing by the number of occurrences, it takes the nth root of the product. It's moving everything into exponential territory: if you were to take the log of a series, compute the arithmetic mean on the log of the series, and exponentiate it back out, it's the same as the geometric mean.
So what does this mean in terms of how these statistical formulas react to variations in the data? For people who are familiar with some form of log distribution: just go graph log(x). What's really important to note is that the first derivative is decreasing; the function is concave. So the delta for values that are farther away, that are higher, is lower. What this means is that when a geometric mean is evaluating values that are outliers, taking into account that log(a) + log(b) is going to be less than a + b, the geometric mean is always going to be smaller than or equal to the arithmetic mean (and we're dealing with strictly values greater than or equal to one here). It should also be noted that a property of the geometric mean, given that it's a times b, etc., is that if any of the values are zero, the geometric mean will also be zero. So, I guess before continuing — that was a bit of a potentially non-structured brain dump — curious as to your thoughts on what has been said so far. So maybe just to clarify again: EMA, SMA, GMA — juxtapose the three of them one more time. Well, we haven't quite gotten to the EMA versus the SMA yet. I've only discussed the geometric mean and the arithmetic mean, and the different ways in which these means react in relation to variation in the underlying series that the formulas are evaluating. Great.
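The log identity mentioned above — the arithmetic mean of the logs, exponentiated back out, equals the geometric mean — can be checked directly (a quick numeric sketch, not a gas-cost claim):

```python
import math

values = [1.2, 0.8, 1.5, 1.1]

# Direct geometric mean: the nth root of the product.
gm_direct = math.prod(values) ** (1 / len(values))

# Log-space version: arithmetic mean of the logs, exponentiated back out.
gm_via_logs = math.exp(sum(math.log(v) for v in values) / len(values))

# The two formulations agree up to floating-point error.
assert abs(gm_direct - gm_via_logs) < 1e-12
```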
So maybe, then, can you talk about — I guess it seems like now we have a two-axis framework to consider, right? Whether to use an EMA or an SMA, and whether to use a geometric mean or an arithmetic mean. Is that correct? Yes, this is correct. And I would also say there might be one more axis to consider here, in the fact that there are two different ways in which one might want to actually query an oracle, or query the result of one of these formulas. The first is the method I mentioned earlier, where someone wants the instantaneous value of something, meaning the value right now. The second is that someone wants some average value over a period of time. For instance, the Sunrise in Beanstalk currently takes the average over the Season, as opposed to the value right now, and uses the average over the Season in the deltaB calculation. So there's an additional axis. First, what type of oracle are we dealing with? Is this an instantaneous oracle, or is this some sort of cumulative oracle, where it's taking the sum every single interval as opposed to just updating some sort of value? Secondly, what type of mean are we applying to the underlying data? Is it an arithmetic mean or a geometric mean? Or — there is a third type of mean that's frequently used, called a harmonic mean. The collection of those three — the arithmetic mean, the geometric mean, and the harmonic mean — are the first three means, referred to as the Pythagorean means, and they form the basis of the three ways to measure some sort of central tendency within a given set of data points. Okay. So there are now three axes. So far, you've talked about arithmetic means and geometric means.
Do you want to talk a little bit about harmonic means and why you haven't discussed them thus far, or do you not think that would be helpful? Yeah, unfortunately I'm not too in the weeds on the harmonic mean. It is my understanding that harmonic means are more used when dealing with rates, but quite frankly I'm not too familiar with them. The arithmetic mean is naturally the most natural mean to consider. The geometric mean is quite present in common financial applications, in the fact that the arithmetic mean is better for distributions in which there is an independent set of data points, which a time series arguably is not; thus there tends to be a use case for the geometric mean when it comes to time series problems. But yeah, not too familiar with the harmonic mean. It is worth mentioning, and perhaps some research is warranted into its use case. And to return to the cumulative versus instantaneous axis: if the only thing that needs to be queried is the instantaneous value, is there still the requirement for a Pump? And if so, do the requirements of the Pump change depending on whether the value that is being supported is cumulative or instantaneous? So, yeah — if you're saying there is no need for an average over time, do you only need the instantaneous mean? Was that the question? Well, yes, the question is: do you have to specify in advance whether or not you need the instantaneous or the cumulative version of one of the means? Mm.
Well, this ultimately depends on the actual implementation, and there are numerous efficiency and optimization considerations to be made about how you want to specify which oracle you're using. Any of them could be implemented in any fashion, and only when restricting ourselves to the realm of reasonably costly computation in the EVM do there start to be limitations imposed around how and when to implement the different types of oracles, and what that means insofar as when you have to call them and whether you need to specify them in advance. This is most notably the case for the SMA; happy to go into that a little bit more. Sure. So before you do that, can you talk about an EMA versus an SMA? Given that it is always desired to evaluate something at the current time, there is a need to use a moving average: as time progresses, the statistical variables which are averages need to move alongside the present. And there are two main types of moving averages, the first of which — Meaning, just to cut you off, meaning the arithmetic mean or the geometric mean? Those values need to be changing over time, and so the question is around how those values change over time? Exactly. So in this case, these would only apply to the cumulative oracle and not the instantaneous oracle?
It applies to both of them, as both the instantaneous and the cumulative oracle perform some type of aggregation of data points, and whether to aggregate the data points arithmetically, meaning the summation divided by the count, versus geometrically, meaning multiplying them together and taking the nth root, is the question. Sure, but that applies to both instantaneous and cumulative oracles. What about with regard to the EMA and the SMA — is that only applying to cumulative oracles in this case? So, arguably both the SMA and the EMA can be used as cumulative oracles — and they both could be used for instantaneous oracles, too. So then it's really two by two by three, the whole system; there are like 12 options. Yes. Great. So could you maybe talk about EMA versus SMA? Happy to. So, getting back into a moving average: we're existing at this time t, and the goal is to use the past n values and perform some type of aggregation or averaging over the values to determine some measure of central tendency, in order to prevent any short-term manipulation or outliers from impacting the value of the query in a substantial fashion, while also ensuring that the oracle is tracking the current price to the best of its ability. It's real fuzzy ground, as the best price is the price now, the instantaneous price. But the price now, the price in this block at this time, is manipulable. The price at the end of the last block is also manipulable; the price between any two blocks is manipulable. So there becomes this notion of: because instantaneously all of these blocks might be manipulable —
How do we create some sort of estimation of the price now, using historical values, using some historical aggregator, to get rid of the outliers? Now, the SMA and the EMA both use some aggregation of the past n values; the difference is in how much they weight each of the individual values. In the SMA, all of the past n values are weighted with 1/n; the SMA weights them all the same. Thus the SMA really is just a basic arithmetic or geometric mean, where you are summing or multiplying together all of the values and then averaging based on the total number of occurrences. The EMA is slightly different, in the sense that its algorithm is: given some alpha (depending on how you define it, it can be a small value or a big value, but it's between zero and one), the EMA at time t is equal to the EMA at time t-1 times (1 - alpha), plus the new value times alpha. And again, it might be written differently, such that alpha is on the past term or alpha can be on the present term, but in general there's some notion of: take the last value of the average, multiply that by (1 - alpha) — alpha is likely to be small, so (1 - alpha) is going to be close to one — plus alpha times the current value, where alpha is going to be incredibly small. What it's doing is averaging together the previous average with the current value based on some weight. So what this means is: instead of the value being some weighted average of 1/n over all historical values, the EMA weights the value now as alpha, the last value as alpha times (1 - alpha), the value before that as alpha times (1 - alpha) squared, and so on.
So the value now is weighted alpha. The value last block is alpha times (1 - alpha), which is some number that's just slightly smaller than alpha, because (1 - alpha) is very close to one and alpha is incredibly small. And the value from two timestamps ago is going to be weighted with alpha times (1 - alpha) squared; when you square a number that's less than one but very close to one, it's going to result in a number that's slightly smaller still. Right — like you take 0.99 and you multiply that by 0.99 — I guess I don't want to do that in my head, but you get the point. So the notion here is: instead of all of the past n historical values being weighted as 1/n, the EMA takes in every historical value and weights it by alpha times (1 - alpha) to the nth power, and therefore there's some rate of decay on the weight of each point. This is very constructive. So there are three axes on which to consider the design of an oracle. It seems like adding any particular iteration of the 12 that we've discussed along these axes would add additional gas costs. Is that correct? Yes, and the gas costs for each of them vary dramatically. And so can you talk a little bit about the implementation of Pumps that you've developed thus far — which of these 12 iterations it supports, how composable the different iterations are, and what needs to be chosen when? Sure. Yeah.
I mean, maybe it's helpful to start with the relative gas cost of each of these 12 variations. So, first off, between the geometric mean and the arithmetic mean: the geometric mean requires performing some kind of complex, high-precision exponentiation that's quite difficult on-chain, or it can convert everything into log terms and then contain the logic of an arithmetic mean. The latter is currently what is implemented. And in the logic of an arithmetic mean or a geometric mean — if you take the log of everything, apply an arithmetic mean, and then exponentiate it back out, it's the same as an arithmetic mean? The same as an arithmetic mean, not a geometric mean? Apologies: if you log everything and take an arithmetic mean, that is the same as a geometric mean. So you're saying that there is a gas-efficient way to implement it, but there's some loss? I'm saying there is a way to implement it, but complex mathematical operations are expensive on-chain, and log and exponentiation are not very cheap, especially when dealing in floating-point terms. The log operation itself has a gas cost of, I think, something like 5,000 or 7,000 gas, and the exponentiation can probably be a couple thousand. So let's say that there's a fixed overhead of roughly 10,000 gas per value for using a geometric mean over an arithmetic mean. And it should be noted that if a geometric mean is already being used, storing a second oracle that also uses a geometric mean doesn't require imposing that gas cost again — logging and exponentiating in and out a second time is not necessary.
So now the next — first off, when it comes to a cumulative oracle, is the geometric mean different from the arithmetic mean? Meaning, does the geometric mean equal the arithmetic mean plus one log operation on write and one exponentiation operation on read? Right. But if you're doing multiple arithmetic or geometric means, the gas cost does not increase? Yes. So, for instance, the current implementation of the Pump uses an SMA as a cumulative oracle and an EMA as an instantaneous oracle, and they both use geometric means. Well, before we talk about the current implementation, let's just get back to — Yeah, but my point is that there are two different oracles here, an instantaneous oracle and a cumulative oracle, both using a geometric mean. Because they are both using a geometric mean and are contained within the same block of execution, the log on write only needs to be computed once. And does this scale to multiple times, so you can take the geometric mean across multiple different time windows? You have to define the time in advance. So this gets into the EMA versus SMA, and how on-chain gas costs tie into this whole thing. For the SMA, there are two use cases: as stated, it could be cumulative or it could be instantaneous. Uniswap V2 uses a cumulative arithmetic SMA; Uniswap V3 implements an instantaneous geometric SMA, and that also serves as a cumulative geometric SMA. Now, it's important to note that if it's intended to use the SMA as both an instantaneous oracle and a cumulative oracle, inherent in having an instantaneous SMA is a way to determine the cumulative over time through its implementation, which we'll get into now.
So, if using just the cumulative SMA, it's only required to update some running sum. In order to calculate an SMA on-chain, the way it's normally done is by taking the current cumulative value of some statistical variable, subtracting the cumulative value from, say, 30 blocks ago, and dividing by the elapsed time. And I guess, to get into why you even have to do that: what is the SMA? It's some sum of the balances over the last 30 blocks. Let's say the balance was updated in every single one of those 30 blocks, in order to sum each individual occurrence. When you read from this SMA, you have some array with the balance at every single block, and it says: read the last 30, add them together, and divide by 30. That would be a simple implementation of an SMA. Now, on-chain, storing one occurrence is expensive: it costs 20,000 gas to store a single value. What this means is that the cost every time someone performs a swap would increase by around 20,000 per value — or per balance, depending on how you group them; there are some gas-optimized ways to store multiple variables in a single slot, but generally it's around 20,000 per balance, and that's relatively expensive. On read, you have to read every single one of these variables. Let's say there are 30 different variables: the cost of reading one is a bit over 2,000 gas, so reading 30 variables is about 65,000 gas, which is incredibly expensive. So instead of using this format, where you save every value and then read the past n and add them together — a basic cumulative arithmetic SMA oracle (and again, the relative difference between arithmetic and geometric is minimal here, only the additional cost of that log operation) — instead of storing each individual occurrence, it stores the summation of x over time. So it stores x at time 0, plus x at time 1, plus x at time 2, plus x at time 3, etc. So if you snapshot that, let's say at time n-30, and at time n you want to determine the SMA over the last 30 blocks: if you have saved the cumulative at time n-30 and you know the value now, you subtract the cumulative at time n-30 from the cumulative at time n, and you get the value at time n-29, plus n-28, plus n-27, etc. So the point is: through the use of a cumulative plus a snapshot at some point, you can, with only reading two data points, determine the sum of all of them. Now, what's important to realize is that this gives you one storage variable that just needs to be updated on each occasion, so you get an update instead of a create, which is five times cheaper — actually more than that, because a create also costs a read, so a create is around 22,000. So instead of a create you get an update, and instead of n read operations you only have two read operations. That's essentially how the Uniswap V2 oracle worked, how Curve's oracle works currently, and pretty much the oracle that Beanstalk uses now. Uniswap V3 took this a step further and wanted to create an instantaneous oracle. The difference, from a functionality perspective, between a cumulative and an instantaneous oracle is that with the cumulative oracle you're required to snapshot a start, and with the instantaneous oracle you don't need to snapshot the start — in this implementation of it, at least.
So what Uniswap V3 did is basically say: we're willing to impose that cost of about 20,000 gas per swap to get rid of the requirement to snapshot the start time. When Beanstalk was initially deployed, it didn't use a flash-loan-resistant oracle for anything. When it was first realized that a flash-loan-resistant oracle was necessary, it required implementing some sort of BDV oracle for the Uniswap V2 LP token price, and Uniswap V2 uses this cumulative method: you take some snapshot, and then over the period since the snapshot you take the average. It turned out to be extremely difficult to build from that an oracle that is inherently instantaneous in nature, meaning that when a farmer deposits, the BDV is needed right now. What this means is that there is no fixed start: someone could deposit in any single block, so it's impossible to have snapshotted the start for every single timestamp. To determine, say, the one-hour SMA using Uniswap V2, the oracle would need to know the cumulative from an hour ago, which means it's inherently required to know what the cumulative was at every single time. Compare that to something like the Sunrise: the Sunrise knows it's not going to need to read the cumulative until the start of the next Season, and at the start of the next Season it can just use the snapshot taken during the prior Sunrise call and take the average over that time period, because there's fixed control of execution at both the start and end of every time frame. Since it happens on a fixed schedule, it can use the snapshot method. In reality, Beanstalk is quite unique in the fact that it even has this type of oracle to begin with.
There are only a few other protocols operating on some fixed time frame, and thus, for most protocols, using the Uniswap V2 oracle for instantaneous queries very quickly becomes an almost impossible task. The solution that Beanstalk ultimately ended up using was to take the average since the start of the Season, and if you tried to deposit Uniswap LP tokens in the same block as a Sunrise, it would actually fail. This was quite a mess. Remembering a fixed start would have been incredibly expensive from a gas perspective (it would change the cost from one update to the cost of a create, or two updates), so there was no fixed lookback on the oracle at all; it was basically "however long it has been since the Sunrise, take the average over that." So one can see how the cumulative oracle very quickly becomes not that useful. Uniswap V3 takes a kind of solution in between: instead of storing the value at every single x (as in the first approach, where you sum the individual values and divide by n), it stores the cumulative at every single x. Writing increases the gas cost by about 17,000, but reading only increases the gas cost by log n, which, when n is sufficiently small, is probably less than 10,000 gas. So it's able to make serious efficiency gains over the basic cumulative oracle.
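A rough off-chain sketch of this observations-array design might look like the following. It is hypothetical Python, not the Uniswap V3 contract: the real oracle accumulates price over elapsed time into a packed ring buffer, while this sketch just accumulates balance-seconds and uses a binary search for the O(log n) read.

```python
# Hypothetical sketch of the Uniswap V3 style design: the cumulative is
# stored at every change, so any lookback can be resolved with a binary
# search (O(log n) reads) rather than a pre-arranged snapshot.
import bisect

class ObservationOracle:
    def __init__(self):
        self.times = []        # timestamp of each observation
        self.cumulatives = []  # cumulative balance-seconds at those times
        self._cum = 0
        self._last_time = 0
        self._last_balance = 0

    def write(self, time, balance):
        # Accrue balance * elapsed time, then store one observation
        # (~20k gas on-chain per swap).
        self._cum += self._last_balance * (time - self._last_time)
        self._last_time, self._last_balance = time, balance
        self.times.append(time)
        self.cumulatives.append(self._cum)

    def cumulative_at(self, time):
        # Binary search for the observation at or before `time`.
        i = bisect.bisect_right(self.times, time) - 1
        return self.cumulatives[i]

    def twap(self, start, end):
        return (self.cumulative_at(end) - self.cumulative_at(start)) / (end - start)

o = ObservationOracle()
o.write(0, 100)
o.write(10, 200)
o.write(20, 100)
o.write(30, 100)
print(o.twap(10, 30))  # → 150.0
```

Any start time (at or before an observation) works without advance coordination, which is exactly what the snapshot-based V2 design could not offer an instantaneous caller.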
But the SMA is still incredibly expensive when used as an instantaneous oracle. The EMA, similar to the cumulative sum, only requires one on-chain update each time it steps forward, because this period's EMA equals last period's EMA times one weight plus the current observation times another; it doesn't need to know anything about the historical observations at all. However, as with the cumulative variant, there's the question of how far you can look back, and whether the lookback is fixed. The main tradeoff between the EMA and the SMA is that with an SMA you have a variable lookback: in the Uniswap V2 implementation you can choose to snapshot at whatever start time you want, and in the V3 implementation you can always look back further, since you have a list of the cumulative at every time it changed, up to some maximum (where the max is reasonably high, maybe a day, a lookback such that no one would want to look back past it, just to impose some restriction on the upper bound of computation time). The point is that the lookback is variable: if Beanstalk is using, say, a one-hour SMA to determine the BDV of a token and then wants to change that to 30 minutes instead of an hour, it just has to change the parameter that controls the SMA width. But again, it's an expensive read and an expensive write. With the EMA, alpha is the variable that ultimately controls how far the lookback is. If alpha is, say, 0.5, the mean is weighted 50% by the newest value; if alpha is 0.1, the newest value is only 10% of the mean.
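The EMA recurrence described here, EMA_t = EMA_{t-1} * (1 - alpha) + x_t * alpha, needs one stored value and one cheap update per step. A minimal hypothetical sketch (the values and alpha are invented):

```python
# Hypothetical sketch of the EMA update: a single stored value is updated
# per step; no history of observations is needed.

def ema_step(prev_ema, new_value, alpha):
    """One EMA update; alpha in (0, 1] is the weight on the newest value."""
    return prev_ema * (1 - alpha) + new_value * alpha

# With alpha = 0.5 the newest value is half the mean; with alpha = 0.1 it
# is only a tenth, so a smaller alpha means a longer effective lookback.
ema = 100.0
for value in [100, 100, 200, 200]:
    ema = ema_step(ema, value, alpha=0.5)
print(ema)  # → 175.0
```

Note that alpha is baked into the recurrence itself: past observations are already blended into the stored value, so the effective lookback cannot be changed after the fact, which is the fixed-lookback tradeoff discussed next.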
And so by having a smaller alpha, the lookback is greater, but it's required to know alpha ahead of time. If an oracle gets deployed targeting a one-hour EMA and the Beanstalk DAO then wants to change it from an hour to 30 minutes, it would require deploying a new liquidity pool using the new Pump, and somehow tweaking incentives to get LPs to migrate into it, which is a lot messier. So an important note is that the EMA oracle has a fixed lookback, while the SMA oracle, used for cumulative or instantaneous queries, has a variable lookback.

I didn't realize I was muted. All right, we're running out of time here, but just to frame the things people should be thinking about heading into this week of discussion: can you talk about the current Pump implementation and what types it supports?

Yes. The current Pump stores a Uniswap V2 style cumulative SMA, but it uses a geometric mean. The geometric mean was chosen based on how it reacts to outliers: as stated earlier, because it operates in log space, it takes outliers into account less than the arithmetic mean. An important consideration, though, is that it behaves very extremely around zero, so there will probably need to be some limitation set on the minimum oracle value, which personally feels like it has a very small effect in practice.
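The two properties of the geometric mean mentioned here (it damps outliers, and any zero collapses it to zero) can be illustrated with a small hypothetical Python comparison; the numbers are invented for the example.

```python
# Hypothetical illustration: arithmetic vs geometric mean on a series with
# one manipulated outlier, and the geometric mean's collapse at zero.
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Computed in log space, as an on-chain implementation would do.
    if any(x == 0 for x in xs):
        return 0.0  # log(0) is undefined; any zero forces the mean to zero
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

spiked = [100, 100, 100, 10000]  # one manipulated outlier
print(arithmetic_mean(spiked))        # → 2575.0
print(round(geometric_mean(spiked)))  # → 316
print(geometric_mean([100, 100, 100, 0]))  # → 0.0
```

The outlier drags the arithmetic mean up 25x, while the geometric mean only roughly triples; the zero case motivates the minimum-value limitation and the locked-liquidity mitigation discussed here.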
It's also likely that Wells will lock some liquidity to start, similar to how Uniswap pools do, so that such an extreme situation rarely if ever happens. So basically, there's a geometric cumulative SMA that's intended to be used for the Sunrise call, where every Season Beanstalk snapshots the total and then takes the delta over the range, and a geometric EMA for instantaneous value queries. The EMA does have the implication of a fixed lookback, so a decision will need to be made by the Beanstalk DAO on what lookback is anticipated to be approved for whitelist, in the case that it is; but it provides gas-efficient writes and reads for instantaneous value queries, as opposed to the SMA variation. It's also important to note that none of the types of oracles discussed today natively have any protection against multi-block MEV, so there is a variable capping the percent change across a block to some fixed percentage, in order to minimize the magnitude of outliers that manipulators can create on the oracle. So there are other parameters, in addition to these axes of oracle design, that need to be considered.

So maybe just to try to understand the Pumps that have been built so far: you said there's a geometric cumulative sum. Does that mean it's an SMA for a geometric mean to be used over time, with a variable lookback, so it can be used for minting over, in theory, any period of time?

Yes, but it requires control over the execution at the desired start of the lookback to snapshot the cumulative. And the Pump also has an EMA that supports instantaneous value queries as an arithmetic or a geometric mean.
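The per-block cap described above can be sketched as follows; this is a hypothetical Python illustration where the 5% per-block limit is an invented parameter, not the actual Pump value.

```python
# Hypothetical sketch of the multi-block MEV mitigation: the oracle stores
# the last capped balance and clamps each newly reported balance so it
# cannot move more than a fixed percentage per block.

def cap_balance(last_capped, reported, max_change=0.05):
    """Clamp `reported` to within +/- max_change of `last_capped`."""
    upper = last_capped * (1 + max_change)
    lower = last_capped * (1 - max_change)
    return min(max(reported, lower), upper)

# A manipulator pushes the balance from 1000 to 5000 in one block; the
# oracle only registers a 5% move per block, so sustaining the outlier
# takes many consecutive blocks instead of one.
capped = 1000.0
for _ in range(3):
    capped = cap_balance(capped, 5000.0)
print(capped)
```

After three blocks the capped balance has only compounded 5% three times (to about 1157.6), which is what forces a manipulator to hold the distorted price across many blocks, at correspondingly higher cost.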
Also as a geometric mean, as noted: having multiple geometric means in the same block of logic helps reduce the gas cost of storing multiple geometric means. So the Pump supports two different geometric means being calculated, one a cumulative SMA and one an instantaneous EMA?

Yes. It also stores the capped balances of the pool so that it can determine what the new cap is. It should be noted that the cost of capping the value as it changes per block is relatively expensive, probably something like 5,000 to 7,000 gas per iteration. But multi-block MEV attacks are a real threat to oracles, and it's important that they are resistant to them. So each oracle is required to store the last capped balance in order to determine the maximum increase and decrease that should be tolerated.

That's right. Well, we've gone for an hour. It feels like we could definitely go for at least another hour, but we'll keep it to the hour the class is booked for; maybe we run it back next class, or even book a bonus class in the next couple of days. Thank you very much, sir. Thank you. I hope the majority of what has been said is coherent; apologies for not always being able to say things in a concise, efficient manner. I felt like you did a fabulous job. Don't be sorry, sir. Thank you very much. Thank you, everyone.