🎙️

S1E11: Dune Analytics with TBIQ

Date
June 27, 2022
Timestamps

0:00 Intro • 1:33 TBIQ intro • 7:12 Integrating community feedback • 12:10 Subgraph discussion • 20:47 Interesting data in Beanstalk • 27:42 Yield curve in the Pod marketplace • 30:10 Machine learning in crypto • 42:00 Useful data • 46:05 Data work after Beanstalk Replant • 49:55 Outro

Type
The Bean Pod

Recordings

Notes

TBIQ intro

  • Resident Dune expert; takes on-chain information and makes it both informative and engaging.
  • Machine learning engineer for a financial technology company.
  • Helped build the Beanstalk Dune dashboard.

Integrating community feedback

  • Early on, took a lot of suggestions for metrics to track or graphs to produce via a Google doc and Discord channels.
  • A large proportion of what’s on the dashboard was community-sourced.

Subgraph discussion

  • Dune is a great sandbox, but not a good long-term data analytics solution, so they’ve been doing a lot of work with the subgraph, which is built on The Graph, a decentralized data indexing protocol.
  • Had the attack not happened, they would have been at the point of focusing more on developing charts for the website and building out the subgraph, as opposed to Dune.
  • There are two separate subgraphs, one for the Bean token and one for the Beanstalk protocol. Their purpose is to structure information about protocol state over time in a way that makes it easily queryable.
  • A well-designed subgraph makes it easy for the application interface to access a lot of data. It takes a lot of work to handle events in order to reconstruct state over time, but once that is done the data is easy to work with.
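The event-handling idea described above can be sketched in miniature. This is an illustrative Python model only, not the actual subgraph code (real subgraph mappings are written in AssemblyScript); the event names, fields, and amounts here are hypothetical:

```python
# Hypothetical sketch: reconstructing running protocol state by replaying an
# ordered stream of on-chain events, the way a subgraph mapping would.
from dataclasses import dataclass, field

@dataclass
class SiloState:
    # running total of deposited Beans per account
    balances: dict = field(default_factory=dict)

    def handle(self, event: dict) -> None:
        account, amount = event["account"], event["amount"]
        if event["name"] == "BeanDeposit":
            self.balances[account] = self.balances.get(account, 0) + amount
        elif event["name"] == "BeanWithdraw":
            self.balances[account] = self.balances.get(account, 0) - amount

# events must be replayed in block order for the state to be correct
events = [
    {"name": "BeanDeposit", "account": "0xabc", "amount": 100},
    {"name": "BeanDeposit", "account": "0xdef", "amount": 50},
    {"name": "BeanWithdraw", "account": "0xabc", "amount": 40},
]

state = SiloState()
for e in events:
    state.handle(e)

print(state.balances)  # {'0xabc': 60, '0xdef': 50}
```

Once this replay logic exists, questions like "how many Silo deposits were there over the last three months" reduce to simple queries over the reconstructed state.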

Interesting data in Beanstalk

  • The Pod marketplace has provided some of TBIQ’s favorite data to explore.
  • Prior to the attack, a goal had been to come up with a robust, up-to-date yield curve for the Pod marketplace to eventually submit a proposal to Fiat DAO to use Pods as collateral for loans.
  • You could clearly see the pricing change along with user sentiment as people got more faith in the protocol.
  • It will be interesting to see the dynamic after relaunch, with the addition of Fertilizer and Ripe and Unripe assets.
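The yield-curve idea above (price per Pod on the y-axis against place in line on the x-axis) can be sketched with a simple fit. This is a hedged illustration on synthetic numbers, not TBIQ's actual model: the log-linear form and the order data below are assumptions.

```python
# Hypothetical sketch: fit log(price per Pod) = a + b * place_in_line from
# marketplace fills, then interpolate a price at any point in the Pod line.
import math

# synthetic (place in line in millions of Pods, price per Pod in Beans)
orders = [(10, 0.50), (50, 0.30), (100, 0.18), (200, 0.07)]

xs = [x for x, _ in orders]
ys = [math.log(y) for _, y in orders]

# ordinary least squares on the log-price
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def price_per_pod(place: float) -> float:
    """Interpolated price per Pod at a given place in line."""
    return math.exp(a + b * place)
```

The "sliding up and to the right" dynamic TBIQ describes would show up here as the fitted intercept `a` increasing over successive refits as faith in the protocol grows.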

Machine learning in crypto

  • Machine learning won’t be explored with Beanstalk in the near term: cryptoeconomic systems are very dependent on human behavior, and Beanstalk will need to scale before large enough samples are available.
  • Specific things like Soil demand modeling and incentive structures can make use of certain statistical models, but it will be some time before black-box machine learning models are effective.
  • There is a lot of data generated on crypto platforms, but the vast majority of it is not useful for analysis. Data sets need to be collected, curated, and presented with the right questions for anything useful to come of it.
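One concrete example of the "white box" statistical models mentioned above is the PID-controller framing for Soil demand discussed in the transcript: adjust an incentive based on the gap between target and observed demand. The gains and values below are purely illustrative, not protocol parameters.

```python
# Hedged sketch of a PID controller applied to an incentive: if observed
# demand is below target, the output is positive, suggesting the incentive
# should rise. Gains kp/ki/kd here are arbitrary illustration values.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target: float, observed: float) -> float:
        error = target - observed
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.5, ki=0.1, kd=0.05)
# demand for Soil below target -> positive adjustment to the incentive
adjustment = controller.update(target=1.0, observed=0.4)
```

Unlike a black-box model, every term here is inspectable, which is why this style of model is tractable at Beanstalk's current sample sizes.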

Data work after Beanstalk Replant

  • TBIQ will be watching closely to see where people are exiting the protocol and taking haircuts. That will offer insights for the future of the protocol.
  • Also watching the Pod marketplace to see how Pod pricing will change, given the shift in market sentiment and 1/3 of mints going to pay back Fertilizer.
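The mint split mentioned above is simple arithmetic, sketched here for concreteness. Only the one-third-to-Fertilizer figure comes from the notes; how the remainder divides (Silo, Pod line) is not detailed in this episode, so it is left as a single bucket.

```python
# Illustrative arithmetic only: roughly one third of new Bean mints repay
# Fertilizer post-Replant; the rest is lumped together here.
def split_mints(minted_beans: float) -> dict:
    to_fertilizer = minted_beans / 3
    remainder = minted_beans - to_fertilizer
    return {"fertilizer": to_fertilizer, "other": remainder}

print(split_mints(300.0))  # {'fertilizer': 100.0, 'other': 200.0}
```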

Transcript

welcome to the bean pod a podcast about decentralized finance and the beanstalk protocol i'm your host rex before we get started we always want to remind everyone that on this podcast we are very optimistic about decentralized finance in general and beanstalk in particular with that being said three things first always do your own research before you invest in anything especially what we talk about here on the show second while you're doing that research try to find as many well-developed opposing viewpoints as possible to get the best overall picture and third never ever invest money that you can't afford to lose or at least be without for a while and with that on with the show on this episode of the pod i'm going to be talking with tbiq those of you who are familiar with beanstalk will recognize this farmer as our resident dune expert for those of you who are less familiar tbiq is a contributor that helps beanstalk take on-chain information and make it both informative and engaging we'll be talking about working with beanstalk's data some projects that he's looking forward to in the future and maybe even talk a little bit about advanced analytics and machine learning well tbiq welcome to the podcast great to have you hey there rex uh thanks for having me it's good to be here absolutely so how about to start us off how would you just tell us a little bit about yourself uh sure sure thing so i'm tbiq as i'm known throughout the community um in my normal life i'm a machine learning engineer at kind of a financial technology company and i'd say i've been involved in crypto tangentially for the last few years i got in around 2017 2018 um didn't have a great understanding of the space was mostly just doing speculative investments as many of us were back then and got burned a couple different times and that kind of i got out of the space for a little bit because i just uh i suffered some serious financial losses so i didn't actually feel like i was uh 
there was much of value here and i hadn't really uh i wasn't really taking a thesis driven approach for why i was here what i was doing so i just left it and then um come 2021 i started hearing more and more about things going on in the crypto ecosystem i had a couple friends from high school who actually got into jobs in the industry so i started paying attention again and i'm really glad i did because initially i was just i got back in through the framework of just investing but the more and more i dug into what was going on in the crypto ecosystem it kind of felt like the overarching goals that crypto has always supposedly been about were starting to materialize when uh back in 2017 2018 there was a lot of talk and very little substance and that's definitely still true to a degree today like we're very early on in the journey of achieving the goal of kind of open and permissionless protocols serving as kind of the backbone of a parallel global financial system um but yeah as i started to like read and learn more about the ecosystem as a whole i got increasingly excited about it so um at the same time i was kind of unhappy in my day job i actually was doing something a little different than what i'm doing now um and i'd kind of told myself why not look at crypto as kind of a career opportunity and i had some friends doing it like i mentioned before and they seemed to be having a really good time so i talked to them about what they were doing and then kind of went down the rabbit holes of daos and um i think dune analytics was kind of the first place i started i realized that was kind of one of the standards across the industry for daos and other protocol teams to kind of organize information about their protocols from either a financial point of view or protocol health point of view it just seemed like a really powerful tool and kind of new paradigm of 
seamlessly and frictionlessly working with um like open data on a platform that was kind of built to rapid prototype and share things with others and that got me really excited so i was playing around with dune a lot and i was just looking for concepts to explore so that i could build out data dashboards for them because that's one of the uh that's one of the primary ways i learn or just the way i learn best and eventually just through serendipity i was led into the beanstalk discord and i think a few days prior to getting into the discord they actually had to get data on dune analytics you need to go through something called the decoding process which is basically dune gets the address and the abi of a smart contract deployed on ethereum and given that information they then create some data tables corresponding to that smart contract state and start monitoring data that is emitted from that smart contract and is stored on chain as time goes on um so there wasn't really anything built out for beanstalk at that point and i just kind of hopped into the chat started getting to know some people in the community and also exploring like is there an opportunity for me to help here and once i determined that there was i just started kind of building some stuff and then uh that eventually led me to be a dao contributor and actually get paid for my contributions which was really cool i didn't set out on this uh this journey initially thinking that was the end goal but i've really really enjoyed my time working with the dao and just everyone else who's a part of beanstalk farms that's fantastic so when i think of how um you know your contribution fit into the growth of the organization it does seem you used the term serendipity and that's really how i think the rest of us saw that as well it seemed to be perfect timing just as members of the community were saying okay you know how can we get some more powerful data visualizations 
it as has happened a number of different times through the community's history somebody just kind of appears and they're like oh hey this just happens to be something i'm either a very knowledgeable about or b very passionate about and i'd love to be involved and you know you're probably one of the primary stories of that happening within beanstalk which is fantastic um one of the things that i've thought about a number of times as we've talked or as i've you know heard you present or talked through different things you're working on as the the let's say data analysis and visualization work with beanstalk has grown and gotten more complicated what um what's the kind of relationship or proportion of times where you've either found something interesting or started looking for something interesting on your own versus taking feedback from the community in terms of a direction like what what does that proportion look like are you spending most of your time looking at requests from individuals or doing let's say like exploration on your own how does that that fit together yeah i think that kind of varies on a week by week basis so i would say when i first started doing this work um i was coming in with very little prior understanding of the protocol and how it operated so um in the beginning i actually set up kind of a google doc that i'd linked at the top of the dashboard it was kind of open edit access and i basically advertised that across the community and i was like hey if you um are really involved in the community here or just interested in learning more about beanstalk and kind of the data surrounding the protocol and you have a good idea for some metric that i might start tracking or a graph that you think might be interesting i collected a ton of suggestions in that document and i think the i got a lot of feedback from that document and also initially i just was frequently asking people in the community for suggestions i set up channels in the discord to 
facilitate suggestions and people were always having conversations about interesting metrics that they were tracking about beanstalk and i would often just see those in the chat and then add them to this long list so i would say a large proportion of what is now on the dashboard was community source and i guess the other place i got ideas was the the website itself so the website had a lot of analytics but um the rate at which you can iterate on the the visual design and also um interactions between different metrics is a lot slower on the website um you have to do we have just a higher standard of testing um like that's kind of how we present ourselves to the world our website so the stakes are a lot higher if you have bugs but on the dune dashboard like i'm constantly working on a bunch of different things so i'm always like oh we have this piece of information in this tab on the website and this other piece of information on this other tab in the website but they're kind of related metrics so what if we combine them into a single view and provided end users kind of a single unified view of some more compound or derivative metric um that is a little bit more insightful or useful sure it sounds like it sounds like dune offers you you know really a sandbox where you can kind of get in and play around and see what makes sense and what doesn't and maybe even develop better questions to ask afterwards that makes sense yeah exactly and i think um the the long-term vision for uh the data infrastructure for beanstalk farms dune is kind of a a short-term solution because we realize like it's really great for if you just have a quick question going having easy access to all a lot of the data you might need to help answer that question so that you can kind of do some initial analyses to see if it's like something that's worth pursuing with more effort and maybe integrating on the website in the future so yeah it's it's really great for that but it has a lot of limitations 
um that make it not optimal as kind of our long-term solution for data analytics so we were doing a lot of work with um the subgraph which is kind of a decentralized data indexing uh protocol um and long term that's where we wanted to move a lot of our analytics to so had the hack not happened i might have been at the point where i was focusing less on dune and working more on actually developing charts for our website or building out the actual subgraphs that contain all of the data for the beanstalk protocol and also the bean token i see so help me understand subgraph a little bit more i mean i am a novice at all of these things and so familiar with the term subgraph familiar with the general concept but how does that how is that an evolution uh on you know capabilities or opportunities in dune yeah so um let me think about how to describe it the subgraph is uh i will say like i'm not kind of our subgraph expert but i do understand it from a high level uh but essentially there's something called the graph network um i don't know if it's uh ethereum based or based on some other chain but just know that there's some kind of data availability layer where they're actually like storing these metrics um and to set up a subgraph you have to do a couple things like one you have to have a protocol to analyze in this case it's beanstalk the beanstalk smart contract and also the bean token smart contract so we have two separate subgraphs one for the bean token and one for the beanstalk protocol the purpose of a subgraph is to structure information about protocol state over time and kind of store it in a way that makes it easily queryable to facilitate things like let's say we have our user interface app.bean.money if you have a well-designed subgraph in any tab or section of the website where you're trying to present some information to a user whether it be about the balance in their account or maybe 
some analytics about their wallet history a well-designed sub-graph makes that like a very easy task to do um but there's kind of some work that needs to be done up front because the thing about smart contracts is they have a lot of complex internal state like smart contracts or programs and they operate on chain um and they have lots of internal variables and things they're representing but not all of that information is easily visible as a developer um so the smart contract is it's it has an interface defined and you can access some of those pieces of information but other pieces of information are really hard to get if you're just trying to call functions on the contract directly so the sub graph allows you to define logic to kind of process uh events uh which are a kind of like data that is emitted from smart contracts and stored on chain um over time so you basically write these uh separate functions for processing event streams and each event kind of describes a user action so if a user deposits beans in the silo there might be a bean deposit event emitted if a user then withdraws beans from the silo there might be a few events associated with that action that are also emitted so you have to write these scripts that interpret these streams of events and kind of reconstruct state or compute interesting metrics about the smart contract that you're tracking and after you have implemented all of this logic it gives you again a nice organized structure that is easy to query that kind of describes the state of the beanstalk smart contract or the bean token smart contract over time so if you're interested in seeing how many silo deposits were there over the last three months that is a question that the sub graph would easily allow you to answer but in if you didn't have a sub graph and you were just trying to interact with the smart contract directly that would be a very difficult question to answer yeah so it's not it almost sounds like you know dune is is looking 
directly at on chain data whereas the subgraph essentially is almost its own data ecosystem that has a better data let's say sequestration from the on-chain information more complex information based on the the type of information it's able to collect and then in turn because of those two things give someone that's performing like visualization or you know some type of complex analysis gives them a better place to perform those functions because that that that data is a lot more robust and it's it's not you're not calling directly on chain in terms of your information query right yeah you're exactly right there so on dune they take events and like various things that happen on chain and kind of translate those to tables in a sql database and then you use the sql language to query those tables but the the structure of the tables dune the dune platform determines the structure of those tables whereas with the graph it's kind of a developer data platform so if you're building an application or a protocol on chain you have control over the schema of your data and different protocols require wildly different schemas just because from an analytics point of view from uh surfacing insights and information to end users point of view the kinds of questions you're asking about say a lending protocol like compound or ave would be very different from the kind of question you'd be asking about like a decentralized uh credit-based stablecoin protocol like beanstalk so the schemas if the development teams can design their schemas in different ways to suit their needs best like that's good for everyone gotcha so to to use one more maybe let's i don't even know if tradfi is appropriate um let's just say modern corporate analogy it almost strikes me like dune makes me think of like a power bi in terms of its ability to take data and and maybe create you know some complex visualizations whereas the subgraph has a lot more depth potentially because you've got a lot more data and a lot 
more pieces that you can put together in unique ways yeah to flesh out that analogy it's like a power bi tool but with kind of an integrated data back end so the data that you could access on the platform though it's growing as time goes on because it's pulling in data in near real time is kind of fixed the data that you can work with you don't have much control over it you just kind of work with the dashboards and you can work on top of it but you can't edit at the base layer whereas the subgraph like you said is far more flexible and a better long-term data solution but there's a lot more development effort that needs to go into getting a subgraph operational it's still a very new um protocol i guess one part that i didn't cover about the subgraph is that there's a token it might be called the graph network token i think the ticker is grt or it could be gnt don't quote me on that but basically that token is used to incentivize indexers and indexers are a class of individual who are running graph nodes and basically they have some compute and hardware that they own or operate and they are using that compute to um compute these metrics as defined by developers for what they want their subgraphs to represent and also store that information as well so it's a very web3 native data solution whereas dune analytics though it's a pretty web3 native company um the data is siloed on their platform whereas with the graph it exists on kind of an open permissionless network which i believe is ethereum i'm not sure if they support other chains but i imagine things like that might be in their roadmap for the future sure so to loop back in you know so you get involved in starting to do you know data analysis and visualization for beanstalk start taking recommendations from the community you know do some of your own exploration what has been the most surprising you can talk about surprising pieces of data 
um surprising developments what what's really been unexpected for you as you've gotten more and more involved from the beanstalk side in the analytics and visualization yeah so um i think that some of my favorite data that i've explored uh related to beanstalk has to do with the farmer's market and um i guess just assuming our audience might not know what that is um pods the debt asset of beanstalk are kind of similar in the traditional finance world to something called a zero coupon bond it's essentially uh a bond representing the protocol's debt and you buy it at a point in time and at the point in time you buy it it has a fixed interest rate so if you spend 100 beans uh at an interest rate of 20 percent um at some point in the future you will be paid back 120 beans but currently you will just have 120 pods and those pods will go to the end of a pod line and you will have to wait until those pods reach the front of the line in order to be able to convert the pods to beans so there's no set time frame on when that might happen there's a lot of heuristics you could use to estimate that but that's just kind of one of the core functions of the protocol and i found it was really interesting to see how the pod marketplace which is a protocol native marketplace where users can sell pods which are a non-fungible token they're not actually an nft but uh they're just represented internally by the protocol so there's no erc20 token or there's just no token standard representing what a pod is it's internal protocol state and this is the only way to exit a pod position in exchange for beans or vice versa if you have beans and you want to hop into some point of the pod line that isn't the back you have to go look at the marketplace and see what the current exchange rate is so maybe you want to get close to the front of the line or maybe you're interested in you're a little more long-term oriented you want to get in the 
middle of the line there's a lot of interesting pricing dynamics to how people have priced pods at various points in line and how those dynamics have changed over time and i think that's been the most interesting thing for me personally to just kind of dig into sure it seems like we were seeing this develop you know prior to the exploit uh something like a a relatively reliable pricing curve in the pod marketplace and and so i look at this so i'm i'm i'm not a uh not a data scientist by trade i'm actually a psychologist by trade when i look at this i'm thinking about you know rational actors in a market and so i've been kind of anxiously watching that myself to see how that curve develops over time and how it tightens up and what what i'm excited to see or will be excited to see in the future is yeah something like a really really tight curve that we could have some some pretty solid prediction around and i know there's been a lot of discussion you know especially let's say a couple months back now you know well well prior to the exploit about even ways that um getting a real tight prediction curve around the pod marketplace may present some new opportunities to interact with other protocols that may be looking at things like bond opportunities around that specific time frame of of pod maturity yeah no that's really fascinating stuff and i was actually the one working on using the history of pod mark or farmer's marketplace pod orders and pod fills to come up with a like robust up-to-date yield curve um and the idea there was to eventually submit a proposal to fiat dao which has the capability of using zero coupon bonds as collateral to underwrite loans and that was one of the things that we were really excited about i don't know if it was formally detailed on the roadmap but it was a concept that we were exploring and um yeah it's it's just it's just a shame that the exploit occurred when it did because that was that was moving along really effectively and as i 
was working on building out that model i was seeing just over the course of weeks or even on the timeline of days the the pod marketplace yield curve when you plot the place in line of a pod on the x-axis and the price per pod on the y-axis and you define a yield curve on kind of a two-dimensional grid system using those axes that had been sliding up and to the right indicating that people were willing to pay more per pod at a fixed point in line as time went on due to increased faith in the system um and i was just watching that unfold in real time and i thought that was real interest really interesting like i don't think that's a trend that would um continue forever obviously it wouldn't so i'm sure at some points like during price discovery phases that curve would shift up and to the right um but at other times it might chip down into the left if the market was a little overheated um but yeah just really fascinating stuff to watch i'm hoping the reboot goes well so we can uh continue seeing how this evolves that'll have some really interesting implications for pod pricing yeah i could not agree more eventually you get to the point where you do have some really some really robust insights you can either share with the community or you know just with the market in general and and just like you said you know those opportunities with fiat there are there's some really exciting opportunities out on the horizon and i yeah when when the restart happens you know obviously all the different functions bean stock go back into place including the marketplace and yeah the the interesting additional factor in terms of the marketplace will be the fact that there's this additional this additional line for fertilizer holders and and i think this in and of itself will be a bit of a case study about you know how does the existence of fertilizer influence activity in the pod marketplace so you know is there a point where individuals say well you know because of you know the 
potential you know price difference over time between you know buying pods in the marketplace right now at such and such a yield and waiting you know three months or whatever i have the opportunity to see a greater return than i would buying fertilizer at 20 humidity and yet again to see participants asking those questions and um you know one of the the um the groups that has been really interesting to watch are the omis um uh a handful have been you know pretty heavily involved in the protocol and specifically in the marketplace doing a lot of um a lot of personal analysis and making recommendations about you know how to to make the most of the marketplace and um uh i know that that group is kind of hung around so again it'll be interesting to see how they how they work through these these opportunity comparisons yeah if we thought the game theory of beanstalk was complicated before it's about to get twice as complicated yes and there's uh adding on to that there's also the uh ripe reverse unripe beans kind of the the vesting that is built back into the repayment structure um post exploit i think that will further change the the dynamics here in the pod marketplace and also as it relates to silo withdrawals so it's it's anybody's guess as to what will happen but i can all i know for sure is that it'll be fascinating to watch unfold in real time could not agree more so i wanna i want to turn the conversation just a little bit towards machine learning and you know as with so many things i'm a novice um find it interesting so when i think about what opportunities there might be uh in terms of you know so obviously on-chain data is is a wealth we talked about subgraph you know being an even more robust wealth of information if you're looking at you know machine learning applications obviously that's a cornerstone when i've had interactions with machine learning applications usually they have been primarily focused with like recommendations for specific actions so in 
our case it could be when to potentially make a purchase do you see as as the available data becomes more robust and and even as machine learning itself continues to develop do you see more of an interaction you know maybe let's say generally between machine learning applications and and you know web three crypto um environments in general and and more specifically with beanstalk do you see any kind of potential interaction from a machine learning standpoint yeah i definitely think as we move forward through time that that will be an increasingly large opportunity for um developers who have that skill set to build into existing products and new products um and to market and sell to people in the community because there's a lot of value there as it relates to beanstalk specifically it isn't something that i would really be exploring on a near-term uh time horizon and i think one of the fundamental reasons for that is that cryptoeconomic systems are very dependent on the behavior of humans and when they operate at smaller scales and i would still classify beanstalk as a rather small protocol um given like we know how many like active siloers there are active sowers there are like it's a few thousand people who have interacted with the protocol um since its inception even though it has at peak achieved over 100 million dollar market cap the fact that the sample size is so small and that there are so many different incentive structures at play i think a lot of machine learning analysis on beanstalk data in particular would be kind of difficult what i do think is more interesting is using simpler uh kind of white box statistical models to fine-tune specific aspects of the protocol so as we move forward things like um figuring out how to measure demand for soil over time i know there's been a lot of interesting conversation in the discord with people who have backgrounds in control theory about modeling um demand for soil as kind 
of like a pid controller and there's a lot of statistical inference and analysis that goes on in order to make decisions about how to incentivize people to do specific actions when the protocol is in a specific state so i think there's a ton of opportunity there for using more opaque and well understood statistical methods but things like black box machine learning models i don't think that's going to be all that useful for determining when to buy and sell your beans in the in the near future but in the ecosystem at large i think for protocols that are operating at much larger scale um we're going to see some really interesting machine learning based products within the the next few years hopefully but it is interesting that even though the the industry is over 10 years old at this point um there still isn't all that much of that going on with on-chain data like if i'm just trying to rack my brain right now for any examples i know of products in the crypto ecosystem that are heavily machine learning based and that i'm not coming up with any any answers yeah so do you think um when i think about the general development of the cryptocurrency ecosystem finance as compared to you know the again machine learning novice here but the the applications that we're starting to hear about in traditional finance or traditional organizations in general so much of the existing applications seem to be very focused on optimization of decision making or um identification of new insights products product markets etc and as you say that and you know you as you mentioned that you know you're you're having a hard time finding or thinking of specific examples of ml applications in you know web3 or d5 or cryptocurrency what goes through my mind is we may not be to the point in the let's just say the cryptocurrency ecosystem to where those more you know fine-tune identification or decision making processes are necessary we're still so like you said despite being you know 10 years in i 
I think we're still in this phase of market development where there's a lot of trial and error done without that fine-tuned data analysis. We're still talking about things like first principles: right now in cryptocurrency we're still determining who has the best first principles, and it may be some time until the projects with really solid first principles, or foundational components, are the norm rather than the exception. You talked about getting in around 2017–2018 — a great example of a time period when a lot of projects that did not have really solid foundations were driven out of the market. So maybe we're just getting to the point where, through these cycles of boom and bust, the refinement is coming, and hopefully at some point in the relatively near future there are enough good projects out there saying, okay, my next move as a project is something like a unique product-identification process or improved decision making, which requires some of that machine learning technology. Maybe we're just not there yet.

Yeah, I agree. We're still in the early stage of all this. We're figuring out what primitives work and what don't, and it's such a large design space. There's so much data out there on these platforms, but at the end of the day I'd imagine the percentage of it that's actually useful for analysis is quite small as a fraction of the total data that exists on chain. We've been talking a lot today about The Graph and about Dune Analytics, and these are still very early-stage data tools. They serve somewhat different purposes, but they're foundational pieces of Web3 and crypto data infrastructure, which is still in its nascency. It has obviously attracted a lot of development recently and has improved by leaps and bounds even in the last few years, but compared to the data tools of the Web2 ecosystem, we are still so far behind. The key to machine learning is having a high quantity of data but also a high quality of data, and I think the quantity for us might be there, but the quality just isn't quite there yet. So as a community we're in the early stages of exploration, figuring out what works well and what doesn't. I see a lot of parallels here to other types of machine learning and how those have developed over time. I don't know if you're familiar with DALL-E — it's DALL-E, or DALL-E 2 — a very large text-to-image model. You may have seen some examples of it online: you give it a prompt and it outputs an image. You could say "a 1950s-style advertisement of Sasquatch mowing the lawn" — an example I saw the other day — and it generates visually coherent images based on that prompt, just as good as a lot of other digital art I've seen from a variety of people online. It's amazing that we've gotten to that point. If I think back five years ago — as someone who has had an interest in machine learning over an extended period of time, even though I'm still quite young, I've always been vaguely aware of the current state of the art on that particular problem, text-to-image generation — there's been a multiple-order-of-magnitude improvement in the outputs of these models over the last five years. I think that's because so much research went into figuring out how to structure the input data sets, how to label them, and how to choose the
right objective functions and loss functions to tweak and tune your models so that they really give you interesting and useful outputs. I think that's a good analog to where we might be in the crypto space. Obviously computer vision is a very different problem domain, but the fundamental path — building out data infrastructure, then developing a very deep understanding of the data, then using that data to build really useful predictive machine-learning-based models — we're on that same path, and at some point in the future there will be some really cool things built out there. I'm just interested to watch it all unfold and hopefully contribute in some way.

Absolutely. It took me a moment to contextualize when you were talking about DALL-E, but not too long ago I got pulled down a rabbit hole looking at machine-learning-driven art, reading a couple of articles on some of the really early image-generation mechanisms, and then I remember seeing something about DALL-E, and it was exactly that: text input that eventually generates an image. That's absolutely fascinating. The other thing you mentioned a minute ago that really lingers with me is the idea of having a ton of data available, with the question really becoming its usefulness. Again, when I think about traditional finance, or traditional corporate approaches in general, a multitude of industries and corporations are looking at things like machine learning and essentially just throwing data at it: okay, what data can we collect electronically, here is this giant basket of stuff. The problem goes from "we haven't had these variables available to us in the past" to "now those variables are available, that data is available" — and the question becomes what do we use, what is useful, how is it useful, how do we fashion it in a way that can be appropriately analyzed, visualized, and communicated. So it seems like the questions are changing in traditional business spaces just as they're changing in the crypto space.

Yeah, I agree. There are two fundamental problems here. One, you're working within a given data domain — in our case, the cryptocurrency ecosystem — and you can think of the data as everything that exists on chain, plus anything related to the operations users perform when interacting with the ecosystem. It could also be off chain: if you're Coinbase, your web traffic might hold a ton of insights about where in the world people are coming from to interact with your platform. So it's not just on-chain data. If you take all of that as the data domain, the first problem is exploring it from a very high level and identifying sets of features that could be related, which you can then take into a smaller, more focused analysis and use to predict something interesting about a particular problem within that larger domain. So, two things: sifting through all the data there is, and then, once you've found a few interesting things, deciding what to build with them. Can you make a model that predicts something useful about the pricing of an asset and lets you make better investment decisions? Or maybe you're using social media data to sift through user impressions of all the alternative layer 1s in the crypto ecosystem, because you're trying to find some competitive advantage as an investor or a developer. It is going to be very interesting to see how, as more folks with that skill set find themselves interacting with
whether organizations or blockchain technology itself, machine learning will evolve. So, one final thing to wrap us up for today. We're sitting here recording this in early June; the Barn Raise is underway, and we're probably about three weeks away from protocol restart. When Beanstalk Replants and gets back underway, what are the first things you're going to be looking at, working on, and thinking about?

First and foremost, what I want to understand is: for the subset of people impacted by the exploit who most want to run for the door, at what point are they going to do that? When the protocol restarts, if you're only vesting half a percent of what you initially lost — even if you want to run for the door right away because you've given up all hope on the protocol functioning successfully (even though this was an attack on the governance facet, not the economic model itself) — is half a percent going to be enough incentive for people to walk out the door? So I'll be closely watching the levels at which there are spikes in exits from the Silo by people holding a large quantity of Unripe Beans, because I think that will have a lot of implications for the future health of the protocol. The more of that short-term thinking the protocol can shed — people who aren't long-term aligned and want to exit their Silo positions — the healthier a spot the protocol is going to be in, because that's just less that it has to pay back; it lowers its burden of debt. So that's primary among the things I'll be watching. Secondary to that would be the Pod marketplace. Personally, I was pretty heavy in Pods. As a DAO contributor, my plan was: for the first three months, I'm going to be really heavy on Pods, because I don't know if the Weather is ever going to be this high again, and then, right around the time the exploit happened, with that paycheck I thought, okay, now everything I've got is going into the Silo. Not the best idea in retrospect, but at the time I thought it was a pretty good plan. So I've got some Pods in the Pod line, and I'm interested to see how the yield curve for Pod pricing is going to change coming right out of the gate. Obviously people have lost confidence in the protocol to some degree, so Pods are not going to be going at the same rate they were before, and there's also the fact that only a third of new Bean mints now go toward paying down the Pod line, as opposed to half pre-exploit. That will affect the decision calculus of the economic actors within the Beanstalk system. Those are the two things I'll be watching most closely, and I'll be trying to develop some graphs and analytics on the Dune dashboard so that other people can monitor those factors as well.

It's funny, because those are probably two of my top items as well. Again, from a psychology standpoint: which individuals are ready to rage quit — ready to say, okay, I'm out — and where they're ready to get out; and then, for the remaining participants looking to take advantage of that marketplace, what the new environment looks like for them. Great stuff. I definitely appreciate your time, TBIQ — this was fantastic, a great conversation.

Yeah, I really appreciate you inviting me on the show. This was a lot of fun, Rex.

Anytime — we'll definitely have you back, for sure.
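As a rough illustration of the Pod-marketplace yield-curve analysis discussed above, one could fit a simple discount curve to marketplace listings: price (as a fraction of face value) falling off with place in the Pod line. Everything here — the exponential functional form, the helper function, and the toy listing numbers — is a hypothetical sketch, not actual Beanstalk marketplace data or TBIQ's methodology.

```python
# Hypothetical sketch: fit price = exp(-r * place_in_line) to Pod listings
# by least squares on log-prices. Data and model form are illustrative only.
import math

def fit_discount_rate(listings):
    """Return the implied discount rate r per Pod of line position.

    listings: iterable of (place_in_line, price) with 0 < price <= 1.
    Minimizes sum((-log(price) - r * place)^2) over r, which has the
    closed-form solution below.
    """
    num = sum(place * -math.log(price) for place, price in listings)
    den = sum(place * place for place, _ in listings)
    return num / den

# Toy listings: deeper in the Pod line -> steeper discount to face value.
listings = [(1e6, 0.60), (5e6, 0.25), (10e6, 0.08)]
r = fit_discount_rate(listings)

# Model-implied fair price for a hypothetical listing 3M Pods into the line.
implied_price = math.exp(-r * 3e6)
```

A fitted curve like this is what would let you quote a "fair" price for an arbitrary place in line — the kind of robust, up-to-date yield curve mentioned in the notes as a prerequisite for using Pods as loan collateral.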