Update on p2pool integration and proposal

We had a very intense discussion about this yesterday, and my opinion is to drop the idea of p2pool. Here are my reasons:

  • it's a 5-year-old idea, and it's extremely inefficient due to network latency. This can't be solved by an implementation in the main chain.

  • there are new concepts out there, like betterhash, which solve the same problem in a different way

  • for me, blockchain is about freedom and the right to decide what to use. If p2pool were implemented as mandatory, the key element of consensus-making would change completely, and I would no longer trust it. Building p2pool into the chain is fine, but it should be optional so everyone can choose to mine solo, with all the consequences.

  • many other coins used p2pool but saw its downsides and dropped it. Litecoin still has it, and nearly no one uses it.

There are many more reasons for me to not do it.

A few great links about betterhash:

https://www.ccn.com/betterhash-bitcoin-core-dev-proposes-new-protocols-to-decentralize-bitcoin-mining/amp/

https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki

One note: doing mini-blocks also does not prevent pools from adopting betterhash.

#3 sounds like the best option to me. Having it be native/proprietary will add some character to the chain/wallet and will be just another thing to set us apart! Also, I personally think adoption is still early enough that a fork is manageable with Verium. If there was ever a good time to do it, it's now.

My vision of the perfect solution is one where it's built into the wallet so that people just click the mine button and see little payments coming in more frequently, regardless of how it's accomplished on the back end.

I'm for #3 as well, just for the fact that this is a possible solution to the centralization of mining, which has always been an issue in all blockchain projects. No one has done a fully integrated, baked-in P2Pool solution like this. We need to bring more miners in and have less centralization to activate the Binary-Chain (AuxPoW).

#3 for me

Hey, I haven’t been around much, but I’ll give my two cents.

First, as time has gone on, I’ve soured on PoW in general due to articles like this (https://blog.sia.tech/the-state-of-cryptocurrency-mining-538004a37f9b). I’m not sure an ASIC free PoW is possible, and it seems big miner manufacturers are bad dudes (especially Bitmain). I fear that over time, if VRM becomes popular, we will end up in the same boat as all PoW coins, which seems bad.

Secondly, about the proposal. I think P2Pool is a failed experiment, so I don’t think implementing it is a good idea. I think the on-protocol pool is a good idea, so my vote would be for #3. However, I don’t know if a 10x decrease is good enough to avoid pools. In the current BTC mining environment even a 100x difficulty decrease would not be enough for a small ASIC farm to avoid having to use a pool.

I’ve been thinking about a solution to this problem: a hybrid PoW/PoS system, where blocks are created by PoW as usual, but rewards are distributed by a PoS-like mechanism.

People would mine special transactions as well as blocks. These special transactions (separate from the regular transactions) would act like shares of a pool (they have a difficulty that is less than the current block difficulty, but greater than some minimum amount) and these would be included in blocks.

The block reward would then be distributed to random share holders, in proportion to the amount of shares they have, and these shares would be consumed. Eventually someone mining a share would create a block, and they could broadcast it to the network.

If you set the minimum difficulty such that a small miner would get a share every day or so, people could use this to earn rewards even if a very large number of miners were mining.

You might need a mechanism to prevent large miners from spamming the chain with shares, but I was thinking you could have a system where higher difficulty shares get proportionally more rewards, so big miners could put in a smaller number of high difficulty shares instead of a huge number of lower difficulty ones.
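The difficulty-weighted payout idea above can be sketched in a few lines. This is a hypothetical illustration, not a protocol spec: the function names, the numbers, and the flat proportional formula are all my assumptions. The point is that one high-difficulty share earns the same as many low-difficulty shares summing to the same total, so big miners have no incentive to spam small shares.

```python
# Hypothetical sketch: distribute a block reward among share holders in
# proportion to the total difficulty of the shares they submitted.
# Names and numbers are illustrative, not protocol parameters.

def distribute_reward(block_reward, shares):
    """shares: list of (address, share_difficulty) tuples."""
    totals = {}
    for address, difficulty in shares:
        totals[address] = totals.get(address, 0) + difficulty
    grand_total = sum(totals.values())
    # Each miner's payout is proportional to their summed share difficulty,
    # so one share of difficulty 100 pays the same as ten shares of 10.
    return {addr: block_reward * d / grand_total for addr, d in totals.items()}

payouts = distribute_reward(
    50.0,
    [("big_miner", 100), ("small_miner", 10), ("small_miner", 10)],
)
# big_miner submitted one high-difficulty share; small_miner, two low ones.
```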

This might be too complex to implement right away, but a 10x difficulty reduction won’t be useful forever, so it might be a good idea to keep in mind.

@verilisk interesting comment. In the on-chain share mechanism, a 10x faster blocktime would also include PoWT's blocktime-decrease mechanism, which is proportional to the computational power on the network. So a mini-block would come every 24 seconds now, but as the regular blocktime decreases, so would the mini-block time. However, the key mechanism is not merely a decrease in difficulty but a way to quantify mining shares over a period of time. For instance, if you find one of these mini-blocks during the share-averaging period, you earn a fraction of a full block reward. So the key parameters, determined finally on testnet, will maximize the frequency of rewards for small miners while minimizing the latency and the data this mechanism adds to the chain.
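A back-of-the-envelope sketch of what that mini-block frequency means for an individual miner. The 24-second figure is from the post above; the formula (expected interval scales inversely with a miner's share of the network hashrate) is my simplifying assumption, not the actual PoWT mechanism.

```python
# Rough estimate: how often one miner finds a mini-block, assuming
# mini-blocks arrive network-wide every 24 s and a miner's chance per
# mini-block equals its fraction of the total hashrate.

def expected_miniblock_interval(miner_hashrate, network_hashrate,
                                miniblock_time=24):
    """Expected seconds between mini-blocks for a single miner."""
    return miniblock_time * network_hashrate / miner_hashrate

# A miner with 0.1% of the network hashrate:
interval = expected_miniblock_interval(1, 1000)
hours = interval / 3600  # roughly 6.7 hours between mini-blocks
```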

I'm still undecided on what I think is the best approach, so for now just some comments, mostly regarding the existing p2pool:

@effectstocause said in Update on p2pool integration and proposal:

There are a couple of issues. One is that Verium hashes are intensive to check, and a Python implementation is orders of magnitude slower than the C/assembly we have built into the wallet. There are ways to incorporate C and assembly into Python, but it's sort of a hack and will never give the same performance.

I'm not convinced that writing the hash function in C/assembly and calling it from Python is necessarily a hack. People move performance-critical functions to native code all the time. And are you sure this will be significantly slower than the current C implementation?

The other major issue with p2pool in general is that, because all these decentralized hashes are checked by the p2pool network, there is extra overhead; p2pool on any network yields lower mining efficiency, and even more so in our case because our hashes are purposely CPU-intensive.

Just so we get some numbers: how many of these decentralized hashes typically appear per real block? 100? An average PC can do around 1-2 kH per minute, which would mean about 2% overhead if we assume a 4-minute block time, and more on a smartphone. I guess the overhead needs to be less than 1% (the typical pool fee) to give people enough incentive to use it.
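Reproducing that overhead estimate explicitly, using the rough figures from the discussion (these are assumptions being questioned, not measurements):

```python
# Back-of-the-envelope overhead estimate from the figures above.

hashes_per_minute = 1000       # low end of "1-2 kH per minute" on a PC
block_time_minutes = 4         # assumed regular block time
extra_hashes_per_block = 100   # guessed number of p2pool hashes to verify

hashes_per_block = hashes_per_minute * block_time_minutes  # 4000 hashes
overhead = extra_hashes_per_block / hashes_per_block       # 0.025 = 2.5%
```

At the high end (2 kH/min) the same arithmetic gives 1.25%, so "about 2%" is the middle of the range; either way it sits above the ~1% typical pool fee.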

More generally, do we have any statistics about the distribution of the global hashrate? How much is from small/medium/large miners? Why don't more people mine solo even though it's more profitable in the long term? This would help us make the right decisions if the main goal is to move more hashes to solo mining. For example, depending on the numbers, it might be possible to give medium/large miners better incentives to mine solo, so that small miners can keep using pools.

The one that brings the most ease of use for non-technical people.

Remember, VRM is aiming to be mined on CPUs and cellphones. We want apps and wallets to start mining quickly after someone installs the software downloaded from official sources. We could make an official VRM pool and offer a one-click miner solution, but I think that goes against the project's objective of decentralization. That could be done by pool owners, though.

So #3 is the one I'm feeling the most.

cheers

I like proposal #3, but I have a concern regarding the PoWT protocol. Does it work like a linear function, where the block time decreases proportionally as the hash power of the network increases, or does it work logarithmically? Because I see problems in both cases.

If PoWT works linearly, then if an enormous number of miners join the network, we would see mini-blocks being solved multiple times a second, causing issues such as forks and orphaned blocks.

If PoWT works logarithmically, then as more miners join the network, increasing the block difficulty, we would see miners go back to pools.
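The two behaviors in question can be illustrated numerically. Neither formula below is the actual PoWT implementation (which, per the later replies, has no whitepaper and must be read from the code); they are just the simplest possible stand-ins for "linear" and "logarithmic" scaling, to show why each extreme is worrying.

```python
import math

# Hypothetical stand-ins for the two PoWT scaling behaviors discussed
# above; NOT the real PoWT formula.

def blocktime_linear(base_time, base_hashrate, hashrate):
    # Block time shrinks in direct proportion to hashrate growth.
    return base_time * base_hashrate / hashrate

def blocktime_log(base_time, base_hashrate, hashrate):
    # Block time shrinks only with the log of hashrate growth.
    return base_time / (1 + math.log(hashrate / base_hashrate))

# With a 1000x hashrate increase from a 240 s base block time:
linear = blocktime_linear(240, 1, 1000)  # 0.24 s: sub-second blocks, forks
logish = blocktime_log(240, 1, 1000)     # ~30 s: difficulty must absorb growth
```

Under the linear stand-in, a 1000x hashrate increase drives blocks to 0.24 s (orphan/fork territory); under the logarithmic one, block time barely moves, so per-block difficulty climbs and small miners drift back to pools, which is exactly the trade-off the post describes.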

@maxwell wasn't there something like a technical whitepaper pdf explaining PoWT on the old website ? can't find it on the new one...

@g4b said in Update on p2pool integration and proposal:

@maxwell wasn't there something like a technical whitepaper pdf explaining PoWT on the old website ? can't find it on the new one...

I don't think there is a whitepaper for Verium/PoWT, you'll need to look at the implementation

Will physical distance affect any of these choices?
Anecdotally and totally unscientifically, I noticed a difference when mining with a pool in Singapore vs USA.
With the same equipment I mined the same amount in a USA pool in 12 days as I mined in a Singapore pool in 10 days.
Would miners be connecting to other nodes for the getwork? Would it be dependent on the closest fullnode?
My personal choice is number 3, even if it means slightly less Verium for me. I am honestly in this for the tech. I think that it is important to build the system and make it understandable. And this solution seems the most transparent and easy to visualize.

Hey Guys,

Good comments and questions. Here is some data and modeling to answer some of the questions and show what I'm currently working on. I'm trying to derive the mini-block time formula (not yet included in this doc, as I'm testing different ones; currently it's static) that minimizes the time between mini-blocks for low-hashrate hardware while keeping the orphan rate as low as possible, in order to model the long-term viability of this approach. There are also equations for modeling the regular Verium blocks. https://docs.google.com/spreadsheets/d/1-D2GsWpYmCiWQlKueeG3on6w87UsGjGB0wx1xljff7w/edit?usp=sharing

@effectstocause In reality, the Litecoin growth rate is not accurate, as it includes implicit growth from GPUs and ASICs, which presumably we will not have. It also assumes the entire network runs on mobile bandwidth, so this is a worst-case scenario. I'll also make a more moderate estimate tab for some imo more realistic projections.

So, after much modeling and working through possible implementations, I've realized there is a much simpler way to do this that doesn't alter the chain dynamics at all and just uses some of the existing chain data space. Instead of mini-blocks, we can mine a special transaction that solves a hash for the next block at a difficulty lower than what is needed for a block but higher than a global minimum. This in no way changes the mining procedure; it just registers lower-difficulty hashes that meet a minimum target as mining shares. The special transaction contains only the miner's address and the hash they solved. When validating a block, miners prove that all special mined transactions' hashes meet the difficulty minimum, and then the address and the difficulty at which each hash was found are used to calculate proportional rewards to that address in future blocks.
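A rough sketch of how a node might classify a mined hash under this scheme. The targets, names, and transaction structure here are illustrative assumptions only; a real implementation would use the chain's compact-bits difficulty encoding and actual Verium hashes.

```python
# Illustrative sketch of the "special mined transaction" share scheme:
# a share hash must beat the global minimum target but may miss the
# full block target. All constants and names are hypothetical.

BLOCK_TARGET = 2**236      # hard target: a hash below this is a full block
SHARE_MIN_TARGET = 2**244  # easier global minimum target for shares

def classify_hash(hash_value):
    """Classify a mined hash as a full block, a share, or too easy."""
    if hash_value < BLOCK_TARGET:
        return "block"      # meets full difficulty: a regular block
    if hash_value < SHARE_MIN_TARGET:
        return "share"      # meets the share minimum: record as a share tx
    return "invalid"        # misses even the minimum target: rejected

def validate_share_tx(tx):
    """A share tx carries only the miner's address and the solved hash."""
    return set(tx) == {"address", "hash"} and classify_hash(tx["hash"]) == "share"
```

Since lower hash values mean higher difficulty, the same comparison that finds blocks also finds shares; mining is unchanged, and only the classification of the result differs.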
