Update on p2pool integration and proposal

@verilisk Interesting comment. In the on-chain share mechanism, a 10x faster block time would also be subject to the PoWT block-time decrease mechanism, which is proportional to the computational power on the network. So a mini-block would come every 24 seconds now, but as the regular block time decreases, so would the mini-block time. The key mechanism, however, is not merely a decrease in difficulty but a way to quantify mining shares over a period of time. For instance, if you find one of these mini-blocks during the share averaging period, you earn a fraction of a full block reward. The key parameters, to be finalized on testnet, will maximize the frequency of rewards for small miners while minimizing the latency and data this mechanism adds to the chain.
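To make the fraction-of-a-reward idea concrete, here is a minimal sketch of proportional share payouts over an averaging period (all names and numbers are illustrative, not the actual implementation):

```python
from collections import defaultdict

def share_payouts(shares, block_reward=1.0):
    """shares: (address, share_difficulty) pairs found during the
    averaging period. Each address earns a fraction of the full block
    reward proportional to the difficulty of its mini-blocks."""
    total = sum(d for _, d in shares)
    payouts = defaultdict(float)
    for address, difficulty in shares:
        payouts[address] += block_reward * difficulty / total
    return dict(payouts)

# Three miners find mini-blocks in one averaging period:
print(share_payouts([("a", 40.0), ("b", 40.0), ("c", 20.0)]))
# {'a': 0.4, 'b': 0.4, 'c': 0.2}
```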

I'm still undecided on what I think is the best approach, so for now just some comments, mostly regarding the existing p2pool:

@effectstocause said in Update on p2pool integration and proposal:

There are a couple of issues. One is that Verium hashes are intensive to check, and a Python implementation is orders of magnitude slower than the C/assembly we have built into the wallet. There are ways to incorporate C and assembly into Python, but it's sort of a hack and will never give the same performance.

I'm not convinced that writing the hash function in C/assembly and calling it from Python is necessarily a hack. People move performance-critical functions to native code all the time. And are you sure this will be significantly slower than the current C implementation?
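For reference, calling into native code from Python typically looks like this minimal ctypes sketch; the library and function names are hypothetical, assuming a small C wrapper is compiled around the wallet's hash code:

```python
import ctypes

# Hypothetical shared library built from the wallet's C/assembly hash code;
# "libveriumhash.so" and "verium_hash" are assumed names, not real artifacts.
lib = ctypes.CDLL("./libveriumhash.so")
lib.verium_hash.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
lib.verium_hash.restype = None

def verium_hash(header: bytes) -> bytes:
    """Hash a block header with the native implementation; the Python
    overhead is one FFI call, not a reimplementation of the hash."""
    out = ctypes.create_string_buffer(32)  # 256-bit digest
    lib.verium_hash(out, header, len(header))
    return out.raw
```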

The other major issue with p2pool in general is that all these decentralized hashes checked by the p2pool network add extra overhead, so p2pool on any network yields a lower mining efficiency, and even more so in our case because our hashes are purposefully CPU-intensive.

Just so we get some numbers: how many of these decentralized hashes typically appear per real block? 100? An average PC can do around 1-2 kH per minute, which would mean about 2% overhead assuming a 4-minute block time, and more on a smartphone. I guess the overhead needs to be below 1% (a typical pool fee) to give people enough incentive to use it.
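Spelling that estimate out (the 100 shares per block and the hashrate are the assumptions above, not measurements):

```python
shares_per_block = 100      # assumed share hashes to verify per real block
pc_hashrate = 1500          # hashes/min, midpoint of the 1-2 kH/min figure
block_time = 4              # minutes

own_hashes_per_block = pc_hashrate * block_time   # ~6000 hashes of real mining
overhead = shares_per_block / own_hashes_per_block
print(f"{overhead:.1%}")    # ~1.7%, i.e. roughly the 2% estimate above
```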

More generally, do we have any statistics about the distribution of the global hashrate? How much comes from small/medium/large miners? Why don't more people mine solo even though it's more profitable in the long term? This would help make the right decisions if the main goal is to move more hashes to solo mining. For example, depending on the numbers it might be possible to give medium/large miners better incentives to mine solo, so that small miners can keep using pools.

The one that brings more ease of use for non-technical people.

Remember, VRM is aiming to be mined on CPUs and cellphones. We want apps and wallets to start mining quickly after someone installs the software downloaded from official sources. We could make an official VRM pool and offer a one-click miner solution, but I think that goes against the project objective of decentralization. That could be done by pool owners, though.

So #3 is the one I'm feeling the most.

cheers

I like proposal #3, but I have a concern regarding the PoWT protocol. Does it work like a linear function, where the block time decreases proportionally as the network's hash power increases, or does it work logarithmically? I see problems in both cases.

If PoWT works linearly, then if an enormous number of miners join the network we would see mini-blocks being solved multiple times a second, causing issues such as forks and orphaned blocks.

If PoWT works logarithmically, then as more miners join the network, increasing the block difficulty, we would see miners go back to pools.
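To illustrate the two shapes (these formulas are made up for illustration; I don't know the actual PoWT rule):

```python
import math

BASE_TIME = 240.0   # seconds; illustrative baseline block time
BASE_HASH = 1.0     # network hashrate at that baseline, arbitrary units

def linear_block_time(hashrate):
    # Linear: 10x the hashrate -> 1/10 the block time; collapses toward zero.
    return BASE_TIME * BASE_HASH / hashrate

def log_block_time(hashrate):
    # Logarithmic: block time shrinks ever more slowly, so the
    # per-hash difficulty effectively rises as miners join.
    return BASE_TIME / (1.0 + math.log(hashrate / BASE_HASH))

for h in (1, 10, 100, 1000):
    print(h, round(linear_block_time(h), 2), round(log_block_time(h), 1))
# linear: 240, 24, 2.4, 0.24 -> sub-second blocks, forks and orphans
# log:    240, 72.7, 42.8, 30.3 -> blocks stay slow, pools stay attractive
```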

@maxwell Wasn't there something like a technical whitepaper PDF explaining PoWT on the old website? I can't find it on the new one...

@g4b said in Update on p2pool integration and proposal:

@maxwell Wasn't there something like a technical whitepaper PDF explaining PoWT on the old website? I can't find it on the new one...

I don't think there is a whitepaper for Verium/PoWT; you'll need to look at the implementation.

Will physical distance affect any of these choices?
Anecdotally and totally unscientifically, I noticed a difference when mining with a pool in Singapore vs. the USA.
With the same equipment, it took me 12 days in a USA pool to mine what took 10 days in a Singapore pool.
Would miners be connecting to other nodes for getwork? Would it depend on the closest full node?
My personal choice is number 3, even if it means slightly less Verium for me. I am honestly in this for the tech. I think that it is important to build the system and make it understandable. And this solution seems the most transparent and easy to visualize.

Hey Guys,

Good comments and questions. Here is some data and modeling to answer some of the questions and show what I'm currently working on. I'm trying to derive the mini-block-time formula (not included in this doc yet, as I'm testing different ones; it's currently static) that minimizes the time between mini-blocks for low-hashrate hardware while also keeping the orphan rate as low as possible, and to model the long-term viability of this approach. There are also equations for modeling the regular Verium blocks. https://docs.google.com/spreadsheets/d/1-D2GsWpYmCiWQlKueeG3on6w87UsGjGB0wx1xljff7w/edit?usp=sharing
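As a rough feel for the tradeoff being tuned, the expected wait for one miner scales with its share of the network hashrate (the numbers below are made up for illustration, not taken from the spreadsheet):

```python
def expected_wait(miner_hashrate, network_hashrate, mini_block_time):
    """Expected seconds for a given miner to find a mini-block when the
    whole network finds one every mini_block_time seconds on average.
    Shorter mini-block times pay small miners more often, but push the
    orphan rate up via propagation latency."""
    return mini_block_time * network_hashrate / miner_hashrate

# e.g. a 1 kH/min phone on a 10 MH/min network with 24 s mini-blocks:
print(expected_wait(1_000, 10_000_000, 24) / 86_400, "days")  # ~2.8 days
```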

@effectstocause In reality the Litecoin growth rate is not accurate, as it includes implicit growth from GPUs and ASICs, which presumably we will not have. Additionally, it assumes the entire network runs on mobile bandwidth. So this is a worst-case scenario. I'll also add a more moderate estimate tab with some, IMO, more realistic projections.

After much modeling and working through possible implementations, I've realized there is a much simpler way to do this that doesn't alter the chain dynamics at all and just uses some of the existing chain data space. Instead of mini-blocks, we can mine a special transaction that solves a hash for the next block at a difficulty lower than what is needed for a block but higher than a global minimum. This in no way changes the mining procedure; it just registers lower-difficulty hashes that meet a minimum target as mining shares. The special transaction contains only the miner's address and the hash they solved. When a block is found, miners verify that every special mined transaction's hash meets the difficulty minimum, and then the address and the difficulty at which the hash was found are used to calculate proportional rewards to that address in future blocks.
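A minimal sketch of how that validation could look (field names and targets are hypothetical; this shows the idea, not the implementation):

```python
GLOBAL_MIN_TARGET = 2**240   # easiest hash allowed to count as a share

def validate_share_txs(share_txs, block_target):
    """Each special transaction carries only a miner address and the
    hash it solved. Keep hashes that beat the global minimum target but
    fall short of the full block target, crediting each address with a
    difficulty proportional to how hard its hash was."""
    valid = []
    for tx in share_txs:
        h = int.from_bytes(tx["hash"], "big")
        if block_target < h <= GLOBAL_MIN_TARGET:
            valid.append((tx["address"], GLOBAL_MIN_TARGET / h))
    return valid

# The (address, difficulty) pairs then feed the proportional payout of
# future block rewards, as in the earlier sketch.
```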
