- No clear definition of scaling; the term is used in a number of different ways.
- Toghrul describes it as a system that increases throughput without increasing hardware requirements. You could scale a system infinitely just by raising the hardware requirements, but then fewer and fewer people would be capable of running the nodes.
Welcome to The Bean Pod, a podcast about decentralized finance and the Beanstalk protocol. I'm your host, Rex. Before we get started, we always want to remind everyone that on this podcast we are very optimistic about decentralized finance in general and Beanstalk in particular. With that being said, three things. First, always do your own research before you invest in anything, especially what we talk about here on the show. Second, while you're doing that research, try to find as many well-developed opposing viewpoints as possible to get the best overall picture. And third, never, ever invest money that you can't afford to lose, or at least be without for a while. And with that, on with the show.

One of the primary goals of Ethereum is scalability. Indeed, a world computer is only useful if it's able to manage and record an ever-growing number of transactions and computations. Earlier this month, the Ethereum network completed its much-anticipated transition from proof of work to proof of stake. Along with other benefits, the Merge is promised to make the network more scalable by removing the congestion created by proof-of-work consensus mechanisms. But that's not to say that proof of stake has been the only scaling solution under development. Ethereum's layer-two blockchains offer scalable speed and capacity by performing transactions off layer 1, then bundling and recording those transactions back on the mainnet. The underlying mechanisms of these L2 solutions, known as rollups, generally fall into two categories: optimistic and zero-knowledge. To learn more about the latter, mod, Publius, and I are spending this episode talking with Toghrul Maharramov, a senior researcher at Scroll. Scroll is a ZK-EVM-based rollup, and together we're going to discuss what that actually means and how this project can help make Ethereum faster and more robust without compromising security or transactional validity. Toghrul, thank you so much for joining us on the podcast. Hello, thanks for having me on. And a big
welcome back to both Publius and mod. Thank you for having me. Hey Rex, thanks for having me as well. Great to have both of you. So, Toghrul, how about you kick us off by setting the stage around scaling and zero-knowledge proofs? Okay, so let's start with scaling first. There's not really a clearly defined definition of what scaling is; people describe it in different ways. What I would describe scaling as is increasing a system's throughput without increasing its hardware requirements, because you could scale a system infinitely just by increasing the hardware requirements, but in that case you have fewer and fewer people who are capable of running the nodes. The question is: how do you scale the system without increasing the hardware requirements? Because naturally it sounds like the only thing you can do is just crank up the settings and process more transactions. There are multiple ways to do it. The original way was sharding, which Ethereum first pursued from around 2015 or 2016. That's when you split the blockchain into a bunch of chains that run in parallel and communicate with each other through the same protocol. In Ethereum's case the security would have been shared, because the nodes running those shards were the same as the nodes running the main chain, but not all of them would run every single shard; some would be allocated to different chains. But that has quite a lot of problems, because when you split the committee into smaller sizes you get less security, et cetera, and it's also quite hard to compose. That's when off-chain scaling came along. First it was the Lightning Network and payment channels, then we got Plasma, which was the original vision for scaling Ethereum via off-chain processing. But that
had a few issues of its own. Specifically, it had a thing called the mass exit problem, where you essentially attempt to exit the chain fraudulently without providing the data, so nobody can challenge you on what the exit is. And then the first zero-knowledge rollup was born. The paradoxical and funny thing about zero-knowledge rollups is that they're not actually privacy-preserving; they just use zero-knowledge proofs. If you take Scroll, StarkNet, or any other zero-knowledge rollup, it's not actually privacy-preserving. But the cool thing that modern zero-knowledge proofs have is a property that we call succinctness. It's a property that allows you to verify a transaction, or a certain computation generally, in less time than it would take to re-execute it; in our case it applies to transactions. So it makes sense to use zero-knowledge proofs as a way to process transactions off-chain, then just put the zero-knowledge proof on-chain for a smart contract to verify and say, oh yeah, great, everything is fine. What was also interesting is that a few years ago it was impossible to do this for something like the EVM, because the computational overhead would have made the throughput negligible, like less than one TPS, even if it was technically possible. But thanks to a lot of breakthroughs in the past few years, we're now at a stage where we have working systems live on Ethereum that allow us to process arbitrary transactions off-chain, and that's how we got here. Zero-knowledge proofs specifically: what is the opportunity you hinted at just a second ago with that succinctness? Talk us through zero-knowledge proofs and the opportunity they provide. So, a zero-knowledge proof is a cryptographic system that allows you to prove that you satisfied a certain function or algorithm.
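As an aside, the succinctness property described above (checking a short proof instead of redoing the work) shows up even in simpler primitives than SNARKs. As a rough, non-SNARK illustration, here is a Merkle membership proof: the verifier re-hashes a log-sized path instead of re-reading every transaction. All function names here are illustrative, not any protocol's API.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Build a binary Merkle tree; levels[0] is the leaf layer, levels[-1] the root.
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd layers
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def make_proof(levels, index):
    # Collect the sibling hash at each layer: proof size is O(log n).
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(root, leaf, proof) -> bool:
    # The verifier hashes one short path instead of scanning all leaves.
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The asymmetry is the point: building the tree touches every leaf once, but checking membership touches only a handful of hashes, no matter how large the batch is.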
The key is that you do this without actually revealing what data you used to satisfy it, or you reveal only part of the data. So let's say you go to a supermarket and you buy some beer, and the cashier asks you for proof that you're over 18. If there were a centralized database that held all the IDs and all the information about them, you could essentially prove with a zero-knowledge proof that you're over 18 without revealing who you are, what your name is, your social security number, et cetera. And the cool thing about them is they've been around for almost 40 years; the first paper was published in 1983, and two of the people who published it currently work in crypto: Silvio Micali, who works on Algorand, and Shafi Goldwasser, who's an advisor at StarkWare. At first it was an interactive system. Let's say I was trying to prove something to you: you would challenge me, and we would go back and forth until you were convinced. But then came a certain scheme that allowed you to transform an interactive proof into a non-interactive proof, where I could just prove it to you without you having to challenge me. It's called the Fiat-Shamir transformation, and that's how non-interactive zero-knowledge proofs started. From there they just kept improving and improving until we got to SNARKs. SNARK stands for succinct non-interactive argument of knowledge, and it doesn't really matter what the rest of the words mean; the only things that matter here are that they're non-interactive, meaning they don't require you to interact with another person to prove that you're correct, and, secondly, that they're succinct, which is what I already described. And you could theoretically use them to prove anything, so even an enormous amount of computation can be proven with something that takes a very small amount of time to check. A PLONK proof, let's say, takes a few milliseconds to verify.
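The Fiat-Shamir transformation just mentioned can be made concrete with a toy example: a non-interactive Schnorr-style proof of knowledge of a discrete log, where the verifier's random challenge is replaced by a hash of the public transcript. The parameters below are tiny and insecure, purely for illustration.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of x such that y = g^x (mod p), made
# non-interactive via Fiat-Shamir: the challenge is a hash of the transcript.
# Parameters are deliberately tiny and insecure; this is only a sketch.

P, Q, G = 23, 11, 2          # g = 2 generates the subgroup of order 11 mod 23

def challenge(*vals: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the public transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x: int):
    y = pow(G, x, P)                  # public key (a public input)
    k = secrets.randbelow(Q)          # one-time nonce (kept private)
    r = pow(G, k, P)                  # commitment
    c = challenge(G, y, r)            # no interaction needed
    s = (k + c * x) % Q               # response
    return y, (r, s)

def verify(y: int, proof) -> bool:
    r, s = proof
    c = challenge(G, y, r)            # verifier recomputes the same challenge
    return pow(G, s, P) == (r * pow(y, c, P)) % P

y, proof = prove(x=7)        # x stays private; only y and the proof are shared
assert verify(y, proof)
```

Note how this also illustrates the public/private input split discussed later in the episode: `y`, `r`, and `s` are shared, while `x` and the nonce `k` never leave the prover.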
And such a proof can cover, let's say, gigabytes and hours of processing. But obviously that would be very expensive to prove, so we use it on more viable things: we applied the whole scheme to processing transactions, proving the fact that you executed the transactions correctly. Yeah, so what you're saying, to go back to your beer analogy, is that with a zero-knowledge proof it's the same amount of information provided outwardly to prove that I am over 18 as it would be to prove my entire financial history, despite the fact that those two questions require very different amounts of underlying information to answer. From an outward-facing standpoint it's just: yes, I'm over 18, or yes, I qualify for whatever financial requirement there is. It's that very simple outward-facing component. So, it depends on the concrete proof system. There are some SNARKs and STARKs that are not constant, so the more computation it takes to prove, the more computation it takes to verify, but the difference is typically shallow. Let's say it takes five minutes to prove one thing versus one minute to prove another: verification is not going to be one second versus five seconds, it's going to be more like one second versus one and a half seconds. But some are near constant. I don't think there's any proof system that is completely constant, because there are certain things it will depend on, for instance the number of public inputs. By public inputs I mean the inputs to the function that are not hidden. Essentially, a zero-knowledge proof takes two types of inputs: public inputs, which are the inputs that we share, so let's say you and I both know what they are; and private inputs, the ones that I don't reveal to you. So you can use the public inputs to verify it,
but you can't use my private inputs; those stay hidden. And in most of the SNARKs, for example the proof systems I described, the more public inputs you have, the more expensive verifying is, but the difference is going to be negligible, so you can consider it to be near constant. Gotcha. So let's transition more specifically to Scroll. How does Scroll fit inside this zero-knowledge-proof part of the scalability ecosystem? So, first of all, let me just describe what Scroll is, so people have a rough understanding. Scroll is a zero-knowledge rollup that uses a SNARK to prove the correctness of execution that was done off-chain, and Scroll is specifically ZK-EVM based. By ZK-EVM I mean that our proof system can prove computation that was done in an EVM. It's a bit modified, so we can't really prove the correctness of Ethereum's own transactions, but those modifications don't affect the code or the smart contracts; they're just efficiency improvements. So you could potentially remove the modifications we've made and prove the correctness of Ethereum's transactions with the system that we use. Essentially, what we do is: you send us a certain number of transactions, we execute them, prove their correctness, and then commit them to Ethereum, which basically means they're finalized without Ethereum having to re-execute all those transactions. I see. So there's a lot of talk about bridges and similar systems that help manage the relationship between L1 and L2. How does Scroll relate to systems like Optimism or some of the other bridges that are out there? Firstly, there's a lot of misconception about how rollups work. A rollup is essentially a side chain that has a validating bridge connecting it to
Ethereum. And what I mean by a validating bridge is a bridge that verifies the correctness of the execution done by the side chain. In our case we use zero-knowledge proofs, so it's explicit: you can only finalize a certain block if the execution was correct. In the case of optimistic rollups it's implicit: you assume that if nobody challenged the commitment within a certain period of time, then it is correct. Obviously there are a few more nuances. For example, you couldn't just connect Solana to Ethereum and call Solana a rollup, because there are a few more properties that a rollup needs to have. For example, a rollup needs to inherit its censorship-resistance guarantees from the underlying base layer, in this case Ethereum. If Solana censors your transaction, you can't force the inclusion of your transaction through Ethereum, whereas on a rollup you should be able to do that. So there are certain nuances that make rollups unique, but overall the goal is to have a bridge between Ethereum and a rollup. Rollups are essentially the most trust-minimized way you can communicate between two different chains. Optimistic rollups have slightly different trust assumptions, but both kinds are quite minimal. In the case of an optimistic rollup you just need one honest node that is capable of challenging the commitment within seven days, or however it's configured, whereas for us you just need to trust that the zero-knowledge proof system you're using is secure. So when I think about use cases: if I were looking to utilize Scroll, how would I implement it, and where would it be most useful? Walk us through a little bit of the opportunity there. So, I have an article coming out about this. Basically, rollups on Ethereum can be used for two different reasons.
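The explicit-versus-implicit finalization contrast above can be sketched in a few lines. This is a conceptual model only; the function names and the challenge-window mechanics are assumptions for illustration, not Scroll's or Optimism's actual contract logic.

```python
# Two ways a validating bridge can accept an L2 block (conceptual sketch).

def finalize_zk(block, proof, verify_proof) -> bool:
    # ZK rollup: finalization is explicit. A block is final the moment
    # its validity proof checks out; no waiting period is needed.
    return verify_proof(block, proof)

def finalize_optimistic(block, submitted_at, now, challenges,
                        challenge_window=7 * 24 * 3600) -> bool:
    # Optimistic rollup: finalization is implicit. A block is final once
    # the challenge window has elapsed with no successful fraud challenge.
    if any(challenges):
        return False
    return now - submitted_at >= challenge_window
```

The trust assumptions Toghrul describes fall directly out of the two signatures: the ZK path trusts only `verify_proof`, while the optimistic path trusts that at least one honest party would have populated `challenges` within the window.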
Either they extend the throughput of Ethereum (so, let's say, when the demand for Ethereum is more than what it can supply in terms of block space), or, secondly, they extend its features. For example, something like StarkNet adds a new VM to Ethereum in a trust-minimized way, so you can use Cairo, which is a completely different system from the EVM, and you build different smart contracts there. We fall into the first category: our goal is to extend Ethereum's throughput. We don't really add a lot of new features (we are looking into a few minor ones), but the idea is that from the user's and developer's perspective, the difference between using Ethereum and using Scroll is going to be unnoticeable. In an ideal world you would be interacting with Scroll and you wouldn't even know that you're interacting with Scroll rather than Ethereum. So essentially we increase the throughput. Aside from that, Ethereum is quite limited in terms of how many computational cycles it can provide. I can't have a smart contract whose transaction requires you to spend 10 million gas, because you would be paying thousands of dollars per transaction for it, whereas on rollups, because it's done off-chain, it should be relatively cheap. So you could have applications that weren't really possible to build on Ethereum because of the cost it would take to use them, for example an order-book exchange. You could potentially build it on Scroll and it would be quite cheap to interact with, whereas on Ethereum it would probably cost you hundreds of dollars per transaction. Gotcha. So I want to hand it over quickly and give Publius and mod a minute; I don't know if there's anything you guys wanted to ask about or talk through. I still have a couple of questions left, but I want to hand it over to the two of you for a minute.
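To put the 10-million-gas figure above in perspective, the cost arithmetic is simple. The gas price and ETH price below are assumed example values chosen to land in the "thousands of dollars" range Toghrul mentions, not quotes from any point in time.

```python
# Back-of-the-envelope L1 transaction cost for the 10-million-gas example.
# Gas price and ETH price are assumed example values, not real quotes.

GWEI = 1e-9  # 1 gwei = 1e-9 ETH

def tx_cost_usd(gas_used: int, gas_price_gwei: float, eth_usd: float) -> float:
    return gas_used * gas_price_gwei * GWEI * eth_usd

# A 10M-gas transaction at 150 gwei with ETH at $1,500: 1.5 ETH, i.e. $2,250.
cost = tx_cost_usd(10_000_000, 150, 1_500)
```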
Yeah, I wanted to start with a question. From my understanding of what you're saying, Scroll's objective is mostly scaling and not privacy; is that correct? Yeah. And on your website you have that roadmap across different phases. Can you maybe take us through it and explain what each phase is for, or what these phases mean? So, phase one is the ZK-EVM proof of concept, and what that means is we have a minimal viable implementation of the ZK-EVM that functions. Currently we can prove that a transaction was executed correctly, but we still can't prove all the opcodes; there are some opcodes where you basically have to trust us that we can prove them. But because we're in a testnet phase currently, it doesn't really matter; it doesn't affect the security of your funds. Phase two is going to be the ZK-EVM testnet: a fully fledged ZK-EVM implementation that can prove all the transactions and all the opcodes, and you don't have to trust us in any way. You essentially send the transaction off, we prove that it's correct, we put it on the testnet, and it's fine. Then phase three is proof outsourcing. What we have come up with is that you could essentially increase the throughput of the protocol by parallelizing the proofs. What that means is: let's say you produce ten blocks in a chain. Instead of computing the proofs sequentially, you can outsource the proof computation to ten different nodes, which in our case we call rollers, to compute the proofs. What that allows you to do is, in the same time it would take you to compute one validity proof, you can compute ten proofs for ten different blocks. So essentially we have a system that allows us to scale with the number of provers we have in the network, and that's what we mean by layer-2 proof outsourcing.
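The fan-out structure of proof outsourcing can be sketched as follows. Names like `prove_block` and the thread pool are assumptions for illustration, not Scroll's actual roller API; in reality each roller would be a separate machine doing genuinely expensive proving work.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of proof outsourcing: instead of proving blocks one
# after another, each block's proof is handed to a separate roller, so
# total latency approaches the time of a single proof.

def prove_block(block: dict) -> dict:
    # Stand-in for an expensive validity-proof computation.
    proof = sum(tx % 97 for tx in block["txs"])          # dummy "proof"
    return {"block": block["id"], "proof": proof}

def prove_sequentially(blocks):
    return [prove_block(b) for b in blocks]

def prove_with_rollers(blocks, num_rollers=10):
    # A thread pool is just the simplest way to show the fan-out/fan-in
    # structure; real rollers are independent prover nodes.
    with ThreadPoolExecutor(max_workers=num_rollers) as pool:
        return list(pool.map(prove_block, blocks))

blocks = [{"id": i, "txs": [i * 3 + j for j in range(5)]} for i in range(10)]
assert prove_with_rollers(blocks) == prove_sequentially(blocks)
```

Because the blocks are independent inputs to the prover, the two paths produce identical proofs; only the wall-clock time differs, which is exactly the scaling claim made above.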
Phase four, the ZK-EVM mainnet, comes once we have all the kinks and bugs sorted and the implementation audited and everything like that; we'll finally launch on mainnet, where you can use real funds rather than imaginary testnet funds. And phase five is the decentralized sequencer. A decentralized sequencer means that instead of a single entity building the blocks, you have a permissionless system where anybody is able to build blocks, the same way that on Ethereum you can just run a validator and build blocks. But the problem with L2s, and rollups specifically, is that it's quite complex to design a system that is decentralized, because you don't really want to introduce overhead: you're already paying Ethereum for its consensus, and you don't want to just add another consensus on top of it; that's a bit wasteful. So how do you decentralize the protocol without adding a consensus while keeping the overhead to a minimum? It's currently in active research, and we have some ideas. It's not something that is impossible to solve; it's quite solvable. The problem is that we need to perfect it, in a sense; we need to make it as efficient as possible before we launch it. So that will be the last phase of the current roadmap. And right now you're at phase two, correct? So, phase two should be coming soon: currently we have the ZK-EVM, and soon we're going to have the fully functional public testnet. And without adding any pressure, how long do you think the phases will take, or when will normal day-to-day users be able to use Scroll? So, we currently have a pre-alpha testnet live, which is a permissioned testnet you can sign up for on our website, and we let people in in batches to interact with it. What you can do is, because we forked Ethereum (we don't really want to clog Ethereum), you can bridge between the fork of Ethereum and the rollup, and we also forked
Uniswap, and you can use Uniswap and swap between the testnet tokens. But the full testnet, which will be public, meaning you don't have to sign up anywhere or ask our permission or anything, should be coming soon. I don't really want to commit to an exact date, because you know how dates work in crypto: if you commit to something, you're just guaranteed to have a postponement. But it should be quite soon. Publius, is there anything you want to jump in with? I had even thought of a quick question as we were talking, but I want to give you the open mic for a second. I appreciate it, Rex. I would love to ask you what the difference between STARKs and SNARKs is, why Scroll chose to use SNARKs, whether you plan on using SNARKs forever, and what the three-to-five-year plan is for where SNARKs and STARKs are going. So, the difference between SNARKs and STARKs is basically in the STARK name. STARK stands for scalable transparent argument of knowledge, and what set them apart is that they're transparent. With SNARKs, originally you had to have a trusted setup, meaning you had a certain cryptographic ceremony where different computers participate together to compute the proving key and the verifier key, and at least one of them had to be honest, so you had some trust. Whereas right now some SNARKs don't need that anymore. For example, the SNARK we're using is Halo 2; it doesn't need that, it's transparent. But we still refer to it as a SNARK because it's based on a SNARK that wasn't transparent, even though there is an argument that a transparent SNARK is a STARK. So currently there's not much difference between a lot of modern SNARKs and STARKs, but originally the difference was that STARKs didn't have to have a trusted setup. Thank you. And I
see in your roadmap here the vision post-mainnet: as you mentioned, a decentralized sequencer and a more efficient EVM. When you say a more efficient EVM, is that improving the actual runtime of the SNARK operations themselves, or is it about processing the EVM, implementing the opcodes, in more efficient ways? Both. You could optimize the circuit and also the prover that you use, and there are also a few ways you can optimize the execution time. For example, we have been looking into (and we're not the only ones; a few people are looking into it) an approach to execution called optimistic concurrency, or parallelization. What that means is you can execute multiple transactions in parallel optimistically and assume that they don't touch the same storage slots, and if they do touch the same slots, you just revert to sequential execution, the way it works now. Somebody called Brock, I think he works at Nascent, already did a proof of concept of that a year ago, and I think in his rudimentary test he got a 5x improvement in execution performance. So that's one of the things that could be improved outside the zero-knowledge stuff, and there are also a few other things, like the way the state storage works, statelessness, et cetera, that we're looking into. That's incredibly impressive; really crazy to hear what's on the horizon, for sure. So, you talked about outsourced rollers a minute ago: are there any concerns with working with outsourced rollers? And when you talked about taking transactions and moving them from sequential to parallel, it sounds like there's some agreement risk in that. Do you have any concerns around that, and how do you manage them?
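The optimistic concurrency idea described above (run transactions in parallel against a snapshot, assume disjoint storage slots, and fall back to sequential execution on conflict) can be sketched roughly like this. It is a toy model, not any client's actual implementation, and the "parallel" phase is simulated for clarity.

```python
# Toy model of optimistic concurrency for transaction execution.

def run_parallel(state: dict, txs):
    results, touched = [], []
    for tx in txs:
        # Each tx runs against the same pre-state snapshot (in a real
        # client these runs would happen on separate threads).
        writes = tx(dict(state))
        results.append(writes)
        touched.append(set(writes))
    # Conflict check: did any two transactions touch the same slot?
    seen = set()
    for slots in touched:
        if slots & seen:
            return None              # conflict: caller must fall back
        seen |= slots
    for writes in results:           # no conflicts: merge all write-sets
        state.update(writes)
    return state

def execute(state: dict, txs):
    out = run_parallel(dict(state), txs)
    if out is not None:
        return out
    for tx in txs:                   # fallback: plain sequential execution
        state.update(tx(state))
    return state
```

When write-sets are disjoint the optimistic merge is equivalent to sequential execution, which is what makes the speedup safe; when they overlap, the results are discarded and the transactions re-run in order.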
The only risk in execution parallelization is that you can potentially expose yourself to DoS attacks, where a lot of transactions try to touch the same state without declaring that they're trying to touch the same state. A similar thing happened on Solana a few months ago, where it went down during an NFT drop: a lot of transactions were trying to touch the same storage slot, and the nodes crashed, not directly because of that, but as a result of it. That's the only concern, and you can mitigate it with pricing. Basically, let's say you declare which slots you're going to touch inside the transaction, and then when it executes, it turns out you touched an additional slot: the cost of touching that additional storage slot should be priced differently from the slots you actually declared. So we're thinking about pricing them in a way that disincentivizes attacks like that. It can only happen because of uncertainty: let's say I'm interacting with a smart contract and I don't know what the outcome is going to be, so I don't know which slots it will end up touching. In that case, that's fine; it's only problematic if it's a targeted attack on a specific slot. One more question on the rollers: will they be trustless and permissionlessly deployed? Yeah, they will be permissionless. That's incredible. And will there be any incentive to run a roller? Yeah, so we're currently looking into a model similar to what Ethereum is going to deploy eventually, which is called proposer-builder separation, where you have a relatively centralized set of nodes that have quite expensive hardware, that can compute and extract the maximum MEV they can from a batch of transactions, and then they bid.
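The declared-slot pricing rule sketched above is easy to express. The constants are assumed example values; the point is only the shape of the incentive: slots declared up front are cheap, while slots an execution touches without declaring them carry a surcharge, which makes targeted same-slot flooding expensive.

```python
# Toy gas-pricing rule for declared vs. undeclared storage-slot accesses.
# Both constants are assumed example values, not any protocol's schedule.

DECLARED_SLOT_COST = 100
UNDECLARED_SLOT_COST = 2_000     # surcharge for surprise accesses

def storage_gas(declared: set, touched: set) -> int:
    undeclared = touched - declared
    return (len(touched & declared) * DECLARED_SLOT_COST
            + len(undeclared) * UNDECLARED_SLOT_COST)
```

An honest user who occasionally hits an unpredictable slot pays a small premium once; an attacker spamming undeclared accesses to one hot slot pays the surcharge on every transaction.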
Essentially, they compete with one another by bidding to the leader of the round (of the slot, in Ethereum) to be the ones picked to build the consensus block. That allows you, as the proposer, the leader of the round or the slot, to collect MEV profits without actually extracting MEV; that work is done by the builder. In our case it will be similar: the sequencers will extract the MEV and then share the profits with the rollers, and obviously transaction fees are the other part of the equation. So in this plan as we've described it, before you decentralize the sequencer but after the rollers exist, users will be submitting transactions to the sequencer entity, which will then say, I'm proposing these blocks, and the rollers will all compete to win the bid to process that block, which will take some amount of time, and then the proposer continues to build more blocks on top of that? Oh, sorry, I was describing a scenario where both the sequencers and the rollers are decentralized. Essentially, how it will work is (also bear in mind this is active research, so we might change the model completely by the time it's released) you select, let's say, five sequencers per slot, and they compete by outbidding each other, and there is one roller who is responsible for picking the highest bid. That's how it will work. In a centralized scenario it's relatively easy: what we could do is something similar to what Optimism does. But Optimism doesn't have rollers, so they don't have to incentivize rollers. What they do is all the extracted MEV goes to public-goods funding, because the sequencer is centralized and operated by
them, and they don't really want to keep the profits from the MEV extracted. What we could do is essentially give all the extracted MEV profits away to the rollers that are computing the validity proofs, and that would act as an incentive prior to the decentralization of the sequencers. Awesome. So in the case you just described, you have n rollers and one sequencer? Yeah. Okay, awesome. But in the fully fledged model, do the rollers themselves become the proposers, or are there still two separate roles, with a group of sequencers and a group of rollers? In the second model, essentially the sequencers will be the builders and the rollers will be the proposers. Thank you so much for walking us through that; really excited to hear where further research goes on that end. This is all incredibly insightful for all of us, so thank you so much. No worries, of course; you're more than welcome. So, two questions that kind of linger in my mind whenever we're talking to any project. The first is security concerns; it's something we can't get around right now. When you think about security concerns for Scroll, what goes through your mind? What are you and your team working to manage right now? The bridge. The main concern, I think, for any rollup is the bridge, and how you minimize the probability of bugs happening at the smart-contract level. And then, looking a few years forward, after, let's say, everything has been battle-tested and we know there are no obvious bugs in the system, how do you decentralize upgradeability? That's especially an issue for our project. If you choose to take a different path, you don't really have that issue: let's say Fuel Labs, when they launched Fuel v1, didn't have upgradeability, because they were like, yeah, if we launch
a second version, we'll just deploy a new contract. We can't really do that, because our goal is to be EVM-equivalent, so in case something changes in Ethereum we need to be able to keep up with it, and therefore it has to be upgradable all the time. How do you have an upgradable system that is also trustless and decentralized? That's probably, long term, the biggest problem we're looking into and trying to solve. Short term, it's going to be the bridge, and I think what we're going to do (it's not really finalized yet) is limit the deposits initially. The way StarkNet works is, every day they just increase the deposit limit by a certain amount, and that guarantees that in case something goes wrong you don't lose 5 billion; instead you lose a few million, which is a problem, but not a catastrophic problem that can threaten the future of your entire project. That's really insightful, and I feel like that is something we're hearing about so frequently right now, bridge exploits and bugs, so that's pretty insightful in terms of looking at that near-term threat. The other thing that I love to ask about is the future, and you've already hinted at future capabilities. Anything else you want to cover in terms of what the project is looking at, in terms of either capabilities or partnerships or new features, anything that comes to mind? L3s, which are rollups deployed on top of rollups. I think at this point it's clear that a single rollup is incapable of handling all the potential demand it might have, so we would need a way to scale out even more. And for a lot of applications you don't really need composability: let's say you're building an NFT-based game, you don't really need the composability where you have to interact with different applications, et cetera. You can just be in your own segregated system somewhere else.
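The gradually increasing bridge deposit cap Toghrul attributes to StarkNet a moment earlier is a simple risk-bounding rule. The numbers below are assumed example values; the point is that an exploit early in the bridge's life can only reach a bounded amount of funds.

```python
# Sketch of a linearly growing bridge deposit cap (example values only).

DAILY_INCREASE = 1_000_000       # cap grows by 1M units per day
INITIAL_CAP = 2_000_000

def deposit_cap(days_since_launch: int) -> int:
    return INITIAL_CAP + DAILY_INCREASE * days_since_launch

def accept_deposit(total_deposited: int, amount: int, day: int) -> bool:
    # A deposit is accepted only while total bridged funds stay under the cap.
    return total_deposited + amount <= deposit_cap(day)
```

As trust in the contracts accumulates, the cap keeps rising, so the worst-case loss from an undiscovered bug grows in step with how battle-tested the bridge is.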
just infrequently interacting and bridging, say when you need to add funds, etc. And for stuff like that we're working on L3s and how to make L3s viable, because you can just deploy a rollup on top of a rollup, and that's fine for some use cases, but for the majority of use cases you don't really need that level of security. So what you can have instead is a validium, which is essentially a zero-knowledge rollup that doesn't post the data on Ethereum but instead uses some other data availability solution to store it. So you can use Celestia, or something else, or you can have your own proprietary solution to store the data for your L3. And so, yeah, I think the future is going to be about L3s and how you make deploying them easy. Essentially, ideally, what we would like is that with a few clicks of a button you can launch your L3, and you don't have to deal with bootstrapping the sequencers and all those problems that you shouldn't really bother with as an application developer; you should just bother with your own application. We're working towards making that a reality, but it's a long-term thing. Let's get Scroll on mainnet first before we start launching L3s. Hey, nothing wrong with looking three steps down the road. That's really exciting. I feel like we're just getting to the point where we're having more and more conversations about L2s, and to be looking at L3s already, I like that. That's the future. Hey Rex, do you mind if I ask one more question here? So why are the STARKs you use not privacy-preserving? It's not that they can't be; they can be zero knowledge. We have removed the zero-knowledge component of our proof system because it adds unnecessary overhead, and because we're not privacy-preserving we don't really need
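The rollup-versus-validium distinction comes down to where the transaction data lives. Here is a minimal Python sketch of the two data-availability choices, with stand-in types for Ethereum and an external DA layer; all names and structures are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field
from hashlib import sha256

def commit(data: bytes) -> str:
    """Simple hash commitment standing in for a real DA commitment."""
    return sha256(data).hexdigest()

@dataclass
class L1Chain:
    """Stand-in for Ethereum: stores whatever batches are posted to it."""
    posted: list = field(default_factory=list)

@dataclass
class ExternalDA:
    """Stand-in for an external data-availability layer (e.g. Celestia)."""
    blobs: dict = field(default_factory=dict)

def submit_rollup_batch(l1: L1Chain, txs: bytes, state_root: str, proof: str):
    # A ZK rollup posts the transaction data itself on L1, so anyone
    # can reconstruct the L2 state from L1 alone.
    l1.posted.append({"data": txs, "state_root": state_root, "proof": proof})

def submit_validium_batch(l1: L1Chain, da: ExternalDA, txs: bytes,
                          state_root: str, proof: str):
    # A validium posts only a commitment plus a validity proof on L1;
    # the full data lives on the external DA layer.
    da.blobs[commit(txs)] = txs
    l1.posted.append({"data_commitment": commit(txs),
                      "state_root": state_root, "proof": proof})

l1_rollup, l1_validium, da = L1Chain(), L1Chain(), ExternalDA()
submit_rollup_batch(l1_rollup, b"batch-txs", "root1", "pf1")
submit_validium_batch(l1_validium, da, b"batch-txs", "root2", "pf2")
```

In both cases a validity proof guarantees correct execution; the security difference is only whether the data needed to exit is guaranteed available on Ethereum itself.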
that. But you can potentially add it back, and it's not going to be an issue. The real problem is: how do you do general-purpose computation that is privacy-preserving but simultaneously efficient? It's not really something that is common at the moment. So, for example, Secret Network do it, but they use a trusted enclave, Intel SGX, and we don't really want to do that. And also, because our goal is to be as similar to Ethereum as possible, and Ethereum is not really privacy-preserving, we're not really concerned with that on an L2 level. On an L3 level we might build some privacy-preserving version of Scroll or something along those lines, but on an L2 level our goal is to be just an extension of Ethereum that allows users to transact and interact with Ethereum in a trust-minimized way without paying an arm and a leg for a transaction. I had one more question on my end. You mentioned that on the testnet you're going to have Uniswap as one of the supported apps, and I understand with other scaling solutions they have certain apps that they support. Is this how it's going to be in the future, that whenever there's an app that needs to be supported or work through Scroll, Scroll has to support it? Or will it be that anyone can interact with Scroll through any other app in the future? Yeah, it's a Uniswap fork, by the way, not even Uniswap itself. So essentially, no: it's a permissionless system, the same as Ethereum. You just have to pay the transaction fees to deploy on Scroll, and that's it. You don't have to ask our permission or partner with us or go through us or even talk to us or know who we are. The same way that, as a developer on Ethereum, you don't really need to know who Vitalik is, you just press a button, deploy, and set up your application like you would on Ethereum. So no, it's just right now,
because we don't want to overwhelm the current implementation that we have, and also the servers that we built, we just want to take it step by step: firstly, don't allow people to deploy their own contracts, just to see how the system works with our own contracts, and then, once we make sure that everything is functioning as intended, we will open up the system so anybody is free to deploy their own contracts and do whatever they want. So Toghrul, I want to turn it over to you. Is there anything else you'd like to share, any closing thoughts, anything we haven't covered that you want to talk through? The only thing that I can add, going back to the question that you asked previously, is that I think in the future a lot of the user interactions, and a lot of our security concerns, are not going to be focused on specific blockchains, say Ethereum, Solana, etc. They're going to be focused on the bridges. So I think in the future we will stop thinking "oh, it's Ethereum"; we're going to be thinking "oh, it's the Nomad bridge" or "oh, it's the Scroll bridge" or whatever. And it's just an interesting thought, because I think at this point it's pretty clear that there's not a single chain that can facilitate all the demands of the users, and so essentially we're headed into a multi-chain future where multiple chains will exist in their parallel universes and have their own ecosystems, etc., and the main concern, because it's the thing that is easiest to break, is going to be the bridges rather than the protocols themselves. So, just going back to the part where you said there's a function with private and public variables: is it that the cost of storing and using private variables is way more expensive than having public variables? I'm just trying to conceptualize why the privacy is more expensive or more difficult. Well, the private variables don't mean that
it's actually private. It just means that, essentially, let's say you execute the transaction and you compute the proof, and then you have the public inputs, so you publish the previous state, the hash of the transaction that you executed, etc. But all the steps that certify the correctness of execution of that transaction, say what your stack looked like after, I don't know, three steps, the protocol that is verifying it doesn't have to know about. So it's basically just additional overhead in terms of storage and latency that is not really necessary, so we hide it. But it doesn't mean that, given the transaction, you can't recreate the same inputs. It's not privacy-preserving in the sense that you can't recreate something or extract the data from the original transaction; we just hide the intermediate state of the transaction because it's unnecessary for you to know it when you're verifying it. Thank you so much for explaining that. No worries. Well, this has been fantastic. Toghrul, thank you again for joining us. Thanks for having me, it was fun. Absolutely. And mod and Publius, as always, thank you for your time too. Thank you, Rex. Thank you both. Everything that was said here today was incredibly insightful, and I truly learned a lot, so thank you for taking the time to come talk with us today. You can learn more about Scroll on their website at scroll.io, and find both the project and Toghrul on Twitter. [Music] The Bean Pod is a production of Beanstalk Farms, a decentralized autonomous organization. You can find us on Twitter, Instagram, Medium, Discord, and our home on the web at bean.money. You can also find me on Twitter at Rex the Bean. And as a final reminder, this podcast is not financial advice. Thanks again for listening. [Music]
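Toghrul's point about public inputs versus hidden intermediate state can be illustrated structurally. The following Python toy is not a real proof system (the "proof" here is just a binding hash, and a real verifier would not re-derive it from the transaction); it only shows which data gets published and which data, the step-by-step execution trace, stays with the prover:

```python
from hashlib import sha256

def h(*parts: str) -> str:
    return sha256("|".join(parts).encode()).hexdigest()

def execute(prev_state: int, tx: list) -> tuple:
    """Toy 'VM': applies a list of increments and records every
    intermediate state. The trace plays the role of the witness."""
    trace = [prev_state]
    for step in tx:
        trace.append(trace[-1] + step)
    return trace[-1], trace

def prove(prev_state: int, tx: list) -> dict:
    new_state, trace = execute(prev_state, tx)
    # Public inputs: previous state, tx hash, new state.
    # The intermediate trace (the witness) is NOT published; a hash
    # binding the public values stands in for a real validity proof.
    return {"prev_state": prev_state,
            "tx_hash": h(str(tx)),
            "new_state": new_state,
            "proof": h(str(prev_state), str(tx), str(new_state))}

def verify(proof: dict, tx: list) -> bool:
    # The verifier checks the proof against public inputs only;
    # it never sees the step-by-step stack states.
    return proof["proof"] == h(str(proof["prev_state"]), str(tx),
                               str(proof["new_state"]))

p = prove(10, [1, 2, 3])  # intermediate states 11, 13 never leave the prover
```

Nothing here is secret in the cryptographic sense: anyone holding the transaction can re-run `execute` and reconstruct the trace. The witness is omitted purely to save storage and verification latency, which is exactly the distinction Toghrul draws from true privacy preservation.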