https://www.reddit.com/r/GRTTrader/
Have you ever seen a sadder subreddit than this one?
Today's good news: another cut to the staking rate on Binance. There you go.
https://x.com/yanivgraph/status/1699858693721141256?s=46&t=S0tHYeLzYtgQbvIowiOjDg
On 08 September 2023 at 01:11:53:
https://x.com/yanivgraph/status/1699858693721141256?s=46&t=S0tHYeLzYtgQbvIowiOjDg
What a bastard
On 08 September 2023 at 01:11:53:
https://x.com/yanivgraph/status/1699858693721141256?s=46&t=S0tHYeLzYtgQbvIowiOjDg
So that's where investors' money goes
Any news?
Yes, the team admitted they scammed us - I just saw that he edited his message to remove the part saying "when I wanted to raise the alarm about the protocol's dysfunctions and calm the hype, our legal team kindly asked me to shut my mouth and let Tegan do the marketing"
That3Percent
13h
Hi! I’m Zac at Edge & Node, and I wrote the original Horizon proposal. It is my fault we’ve been silent about it because I’ve taken time to deal with a family emergency. I’m back now and will actively work toward publicly communicating about Horizon. I know there has been much uncertainty and angst. But Graph Horizon has given many renewed hope for the protocol’s future. There’s been more excitement and collaboration across the core devs lately than ever before. I’m thrilled to finally begin sharing these ideas with you.
Since launching the original protocol, we’ve learned a lot about our users’ needs and what steps the protocol must take to be the de facto standard for decentralized access to the world’s public data. The design of Graph Horizon aims to rebuild The Graph from the ground up using first principles. It solves problems like:
Reliance on oracles and other forms of centralized governance
Bureaucratic processes for integrating new data services and products into the protocol
Permissioned roles, like the arbitrator, council, and gateway
Inefficient tokenomics that punish people for using the protocol by burning their tokens
Confusing and intractable UX for users
Security holes and economic attacks
Rewarding lazy whales at the expense of those who provide value to consumers
Unencapsulated complexity that makes it difficult to evolve the protocol or publish MVP products without breaking everything
Incentives to disintermediate the protocol
Unscalable mechanisms and high per-subgraph overhead
Failure to find product market fit
And more
Part of the hesitance in talking about Horizon stems from the fact that most of the sales pitch is “It’s The Graph I thought we were trying to build, without the problems.” I don’t know if there is a way to sell you on it without tearing apart the current iteration of the protocol and exposing its fundamental issues publicly. But, we understand that to gain the community’s support necessary to improve the protocol, we will need to be radically transparent about the current state of the protocol.
So, that’s what I will do - starting with curation. Tomorrow I will begin writing about the problem curation attempts to address, why it fails, and how we can design a better system. The opinions will be mine and mine alone, not representative of Edge & Node nor The Graph (the latter is not an entity and has no opinion). Buckle up.
the thread is brutal, the dev confesses everything
He says nothing alarming at all, what are you on about, he's even confident about the future
What akonadis really doesn't want you to know
https://www.telekom.com/en/media/media-information/archive/quickly-query-blockchain-data-with-deutsche-telekom-1050102
this kind of news is meant to make you sell at the bottom
What news?
Well, the Deutsche Telekom news...
For the record
Deutsche Telekom AG (DT) is the largest German and European telecommunications company.
Basically, they're doing a big mea culpa on their internal forum, where they explain that they sold a dream by dumping on retail during the bull run, that their tokenomics are garbage, and that the protocol is doomed to fail unless they spend 10 years fixing their mess
Do you have the rest of his post please?
Tomorrow I will begin writing about the problem curation attempts to address, why it fails, and how we can design a better system. The opinions will be mine and mine alone, not representative of Edge & Node nor The Graph (the latter is not an entity and has no opinion). Buckle up.
Part 1: The purpose of curation & foreshadowing some Horizon concepts.
The Graph protocol, stripped to the bones, requires at least three roles, which I will simplify to caricatures:
The Subgraph developer authors and publishes subgraphs. They play a vital role in growing the usage of The Graph in the same way that smart contract developers grow usage for blockchains. So, ideally, we would like to incentivize subgraph authorship because our whole thesis of decentralization revolves around incentivizing value-add behavior. It is possible that a subgraph developer is altruistic and publishes subgraphs for free as a public good. But ideally, they have some financial motivation. They could derive extrinsic benefits from the Subgraph existing, for example, when the Subgraph enables a dapp. Or, if the Subgraph is valuable to the network, maybe they could take a cut of the query fees for the subgraphs themselves. (Unfortunately, the latter incentive mechanism is problematic in the context of The Graph, as we will see. Nonetheless, it is helpful to keep this idea in mind when considering the design goals of curation.) The Subgraph developer wants to publish their Subgraph to The Graph rather than run infrastructure. (If the dapp developer is the only one that can run the infrastructure, the dapp is not decentralized.) Hopefully, subgraph developers want their Subgraph to bring value to consumers.
The consumer demands queries for data indexed by subgraphs. They might be interested in raw data for analytics. More likely, they are using a dapp that depends on dynamic blockchain data. The consumer wants to get interesting data cheaply, verifiably, at low latency, and with high reliability.
The indexer is profit-motivated and should be incentivized to serve data to consumers. Ideally, they are rewarded when they bring consumers value (interesting, cheap, low-latency, verifiable queries.) Their core competency is running infrastructure. So, they want decision-making to be automated when possible.
These roles form a symbiotic relationship. Each depends on and benefits from the specialized work of others through the medium of exchange using The Graph protocol! Because of the symbiotic relationship, value for one group can accrue to the others in a flourishing ecosystem. (I apologize if this is pedantic. I promise all of this setup is critical to understanding curation.)
An indexer wants to automate their decision-making but needs to make intelligent choices about which subgraphs to index. They want to profit by maximizing revenue and minimizing costs. To do this, they must know how expensive subgraphs are to index, how much demand there is for queries, and how well they would fare in the market against competing indexers. None of these can be predicted perfectly, but there is value in minimizing uncertainty and risk. (Remember - when you reduce uncertainty, this value can accrue to all protocol users because of the symbiotic relationships that exist through the medium of exchange.)
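The decision problem described above can be sketched as a toy expected-profit calculation. This is only an illustration of the reasoning, not anything from the protocol; the function names and every number are made-up assumptions.

```python
# Hypothetical sketch of the indexer's allocation decision described above.
# All names and numbers are illustrative assumptions, not protocol values.

def expected_profit(est_query_fees, cost_to_index, win_probability):
    """Expected profit from indexing one subgraph: the revenue the indexer
    expects to capture against competing indexers, minus indexing cost."""
    return est_query_fees * win_probability - cost_to_index

# An automated indexer would rank candidate subgraphs by this estimate.
# Each input is exactly one of the uncertain quantities named above:
# demand (fees), cost to index, and competitiveness (win probability).
candidates = {
    "subgraph-a": expected_profit(est_query_fees=1000, cost_to_index=200, win_probability=0.5),
    "subgraph-b": expected_profit(est_query_fees=300, cost_to_index=250, win_probability=0.9),
}
best = max(candidates, key=candidates.get)
```

The point of the sketch is that all three inputs are uncertain, so any mechanism that tightens the estimates (which is what curation attempts) directly improves this automated choice.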
How can indexers reduce uncertainty to automate this decision-making process? Here are a couple of ideas that don’t pass muster.
Maybe an Indexer could observe historical data about query fees. By observing on-chain payments, an Indexer may gain insight into how much value flows through the network for a particular subgraph and which indexers compete in that market. There are a few problems with this idea.

First, if The Graph is permissionless, someone could send large payments for spoofed queries through the system. They could impersonate a consumer and indexer by sending payments to themselves without actually serving queries. There are various reasons to do this, from creating the illusion of demand to just griefing indexers. So, indexers should not trust historical query volume indicators. There is a mitigation, but it’s not foolproof. The protocol could burn some percentage of query fees. (The protocol does this today, unfortunately.) The more you burn, the more of a deterrent to fee spoofing, but also the more friction for real consumers and indexers, who eat the extra cost for legitimate traffic. Punishing everyone rather than offenders (which, I repeat, is how The Graph approaches problems today) is not just distasteful. It raises a serious question. Why should consumers and indexers send query fees through The Graph at all!?!? Shouldn’t users prefer the cheaper option of transacting via ERC-20 token transfers? (This problem is called “protocol disintermediation” and is one primary motivator for Horizon’s design, but we’re getting ahead of ourselves.)

Another problem with observing historical query fees is that it doesn’t solve the bootstrapping problem. When a subgraph is first deployed, there is no historical data that indexers can look at to reduce revenue uncertainty. Yet another problem is that the solution implies a linear cost for regularly publishing query fees on-chain per Subgraph. Linear recurring overhead hurts everybody in the ecosystem. The high prices resulting from this design make The Graph unviable for the long tail.
Even without these problems, this solution would be incomplete because it does not address the variable cost of indexing.
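The burn tradeoff above reduces to simple arithmetic: the deterrent to a spoofer and the overhead on honest users are the same fraction of fees, so they scale together. A minimal sketch, with made-up rates (not the protocol's actual parameters):

```python
# Illustrative arithmetic for the fee-burn tradeoff described above.
# The burn rates here are invented for the example.

def spoofing_cost(spoofed_fees, burn_rate):
    # A spoofer paying query fees to themselves only loses the burned fraction.
    return spoofed_fees * burn_rate

def legitimate_overhead(real_fees, burn_rate):
    # Honest consumers and indexers eat the same fraction on every real query.
    return real_fees * burn_rate

# Raising the burn rate raises both numbers identically: the deterrent and
# the friction are the same quantity, so everyone is punished, not just offenders.
for rate in (0.01, 0.10):
    assert spoofing_cost(100.0, rate) == legitimate_overhead(100.0, rate)
```

This is why a burn can only trade spam resistance against disintermediation pressure; it cannot target offenders specifically.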
One option that should come immediately off the table is central planning. There could be oracles, or we could have permissions on who can pay into The Graph to limit query spoofing. The Graph employs these solutions today, which is unacceptable for a decentralized protocol. Decentralized implies permissionless.
One idea to remove the problem of cost uncertainty would be to use static analysis. Predicting the amount of compute required to execute a function in the general case is impossible due to the halting problem. We could resort to Turing-incomplete languages, but the tradeoff is sacrificing expressiveness, power, and familiarity. Subgraph developers may need to learn new concepts or languages before developing a subgraph. Some Subgraphs may not be possible to build at all. Since subgraph developers grow the ecosystem, taking on this kind of restriction would necessarily reduce the size of the market. Even if we have static analysis, we do not know the size of the data because of data dependencies between the code and chain, as in the case of spawned data sources. Also, some data to be crunched will be produced in future transactions. This is a non-starter.
With a few ideas tossed aside, curation was conceived as a prediction market for query fees. A cut of each Subgraph’s fees is paid into a revenue stream for curators. Profit-motivated curators bring outside information to the chain by purchasing shares. In equilibrium, we might expect that the shares purchased correlate with the size of the revenue stream. This is because if one Subgraph has too much curation and another too little, individual curators would be better off rebalancing. Mix in the wisdom of the crowds and get reliable signals. At first glance, this appears to be a brilliant and elegant solution to our problem. Spam prevention happens naturally through the cost of capital required to own shares over time. The revenue stream offsets that cost of capital. There is an exchange and specialization of services - indexers and consumers pay curators through the curation tax and receive valuable off-chain information predicting demand.
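The rebalancing equilibrium claimed above can be shown with a toy yield calculation. This is a simplified model with invented numbers, not the protocol's actual accounting:

```python
# Toy model of the equilibrium argument above: if curation shares are out of
# proportion with fee streams, an individual curator improves their yield by
# rebalancing. All figures are illustrative.

def yield_per_share(fee_stream, total_shares):
    """Each curator's cut of a subgraph's fee stream, per share held."""
    return fee_stream / total_shares

# Subgraph A: large fee stream, little curation. Subgraph B: the opposite.
a = yield_per_share(fee_stream=1000.0, total_shares=100.0)  # under-curated
b = yield_per_share(fee_stream=100.0, total_shares=100.0)   # over-curated

# A curator holding B-shares earns more by moving capital to A. In aggregate,
# this pushes shares toward proportionality with revenue - which is exactly
# the demand signal indexers want to read.
assert a > b
```

In equilibrium, yields equalize across subgraphs, so share counts become a proxy for relative fee streams.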
A couple of additions are needed to solve problems with this idea, but these fixes will come at a cost. The first problem is that since each Subgraph is a separate market, we cannot expect enough liquidity or trading volume on each to enable accurate price discovery. Without price discovery, we would lose the accuracy of the signal - the purpose of this system. This is where bonding curves come into play. With bonding curves, the protocol becomes an automated market maker, ensuring there is always liquidity to meet demand for price discovery. Additionally, bonding curves enable the idea that a curator could be rewarded for curating on a subgraph early.
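A minimal bonding-curve sketch makes both properties above concrete: the protocol can always quote a price (liquidity), and minting is cheaper at low supply (the early-curator reward). This uses a simple linear price curve for illustration; the protocol's actual curve and parameters differ.

```python
# Minimal bonding-curve sketch. Assumes a linear price curve p(s) = slope * s;
# this is an illustrative choice, not the protocol's actual curve.

def mint_cost(current_supply, amount, slope=1.0):
    """Cost to mint `amount` shares: the integral of p(s) = slope * s
    from current_supply to current_supply + amount."""
    s0, s1 = current_supply, current_supply + amount
    return slope * (s1**2 - s0**2) / 2.0

# The curve always quotes a price, so price discovery never stalls for lack
# of a counterparty - the protocol itself is the automated market maker.
early = mint_cost(current_supply=0.0, amount=10.0)    # buying at low supply
late = mint_cost(current_supply=100.0, amount=10.0)   # the same shares, later
assert late > early  # early curators pay less for the same stake
```

Selling back into the curve works the same way in reverse, which is what lets early curators realize a gain if curation on a subgraph grows after them.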
Indexing rewards are not necessary for curation to exist. Reading the above, it is not clear they fix a problem with curation per se. In fact, I will later argue that the two concepts are mutually incompatible and must not be integrated. But, it would be remiss to leave them out of the description of the current system, and I will grit my teeth and try to justify their inclusion as a part of curation.