Bible Pay

Archives => Archived Proposals => Topic started by: Rob Andrews on February 10, 2018, 11:38:12 AM

Title: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 10, 2018, 11:38:12 AM
The primary purpose of this proposal is to gain community approval to use Proof-of-Distributed-Computing as a major reward algorithm in BiblePay, and as a tool to help bust the botnet.

Please read some of the background info on PODC here:
http://wiki.biblepay.org/Distributed_Computing


Block Security
===========

With Proof-of-Distributed-Computing enabled, I propose that we continue to use the existing consensus algorithm, PoBH (Proof-of-Bible-Hash), for block security.

This is so our chain's integrity can be maintained and not be entrusted entirely to any third party. We continue to use DGW (Dark Gravity Wave) as our difficulty algorithm; this helps keep us less prone to 51% attacks (attacks mounted by buying up a percentage of BiblePay hashpower on the market).

The existing PoBH PoW algorithm for chain security ensures that we have a structurally sound blockchain that syncs from zero consistently, and that it is easy for the core wallet to detect forks. (This is a major concern, and must be taken seriously.)

(This is in contrast to a common problem with proof-of-stake consensus systems, where hackers gain a potential attack vector by solving blocks on multiple chains, making it hard for the core wallet to recognize which chain has the most work.) In PoW-based chains it is easy for the core wallet to stay on track, as the chain with the most work usually has an order of magnitude more work than the others.

I recommend that we keep the existing PoBH PoW heat-mining system for block security and for chain-sync consistency. However, we reduce the payments on the PoW heat side by 90% and raise the PODC rewards by 90%, so that we starve out the botnet first of all.

IMO, no botnet is going to choose to heat mine for 90% less when it can choose Rosetta mining for 90% more.
(This principle is proven in practice in every casino.)

I propose that we promote PoBH mining only on the controller wallets, since those wallets already handle sending and receiving BiblePay funds, starting and stopping sanctuaries, managing Rosetta CPIDs and CPID association, and can also mine on one thread for our block security.

It has been raised as an issue that if we lower rewards on the heat side by 90%, most strong miners will go away, leaving our security more vulnerable to an entity with large dormant hashpower: such an entity could attempt a quick hash attack, take over the chain, and issue a double spend.

To combat this, I propose that we limit the ability to solve blocks to Researchers with active CPIDs in the chain (associated cancer researchers) who have existing magnitude and coins earned in the last superblock, and limit each CPID to solving one block per 6-block period. This ensures that one researcher cannot bring massive hashpower to bear and solve another block within the confirmation period; it would require distinct CPIDs to join in, lessening the chances of an attack. Since the CPIDs must be actively crunching with RAC and magnitude and appear in the prior superblock, a horizontal scaling attempt is not possible (it would take at least 2 days to ramp up a new CPID's magnitude and be paid, while the attacker's old CPIDs would be diminishing in power). In addition, DGW already adjusts to the current difficulty level, so a brute hashpower attack is very unlikely to win: with our low-nonce rule, the attacker's difficulty would rise exponentially per solved block, and if any non-hacker CPID solves a block in between (which is highly likely due to the low-nonce rule), the attack fails.
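
The one-block-per-window rule described above can be sketched roughly as follows. This is an illustrative Python sketch, not BiblePay's actual C++ implementation; the function and parameter names are assumptions for the example.

```python
# Illustrative sketch of the proposed eligibility rule: only CPIDs with
# magnitude in the last paid superblock may solve blocks, and each CPID
# must wait out a 6-block cooldown between solved blocks.
COOLDOWN_BLOCKS = 6  # window size named in the proposal

def may_solve_block(cpid, height, last_solved_height, active_cpids):
    """Return True if `cpid` may solve the block at `height`.

    active_cpids       -- CPIDs with magnitude in the last paid superblock
    last_solved_height -- dict mapping CPID -> height of its last solved block
    """
    if cpid not in active_cpids:
        return False  # only associated researchers may mine
    last = last_solved_height.get(cpid)
    if last is not None and height - last < COOLDOWN_BLOCKS:
        return False  # still inside the 6-block cooldown
    return True
```

Under this rule an attacker with one CPID cannot chain consecutive blocks, regardless of hashpower; a takeover would need many distinct, actively crunching CPIDs.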

Security:

In this solution, we require Rosetta CPIDs to actually type in their credentials to become associated with BiblePay. All other BiblePay researchers check the burn transaction in the memory pool to verify the CPID is actually owned by its claimant. It is therefore not possible to receive research rewards for any CPID not owned by the claimant and verified with the burn transaction. Once the burn is checked, the researcher's signature is set in the chain, and we respect future signed CPID messages from this researcher. Replay attacks are not possible.

The SanctuaryQuorum votes on the daily magnitude file. We have a system in place to ensure only the official CPID magnitudes will pass the vote (a net 10% have to vote on the correct hashed contract) in order for the contract to be recognized as valid for the daily superblock.
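
A minimal sketch of that voting rule, assuming the 10% threshold from the text. The names and structure here are illustrative only, not the actual SanctuaryQuorum code.

```python
# Sketch: each sanctuary hashes its copy of the daily magnitude file and
# votes for that contract hash; a contract is only valid if at least 10%
# of sanctuaries agree on the same hash.
import hashlib
from collections import Counter

REQUIRED_SHARE = 0.10  # "net 10%" threshold from the proposal

def contract_hash(magnitude_file: bytes) -> str:
    """Deterministic hash of the magnitude file contents."""
    return hashlib.sha256(magnitude_file).hexdigest()

def winning_contract(votes, sanctuary_count):
    """votes: list of contract hashes submitted by sanctuaries.
    Returns the agreed hash, or None if no consensus was reached."""
    if not votes:
        return None
    best_hash, count = Counter(votes).most_common(1)[0]
    if count / sanctuary_count >= REQUIRED_SHARE:
        return best_hash
    return None  # no consensus: the superblock pays no research rewards
```

The `None` path is what the later "DR mode" discussion relies on: with no agreed contract, the daily superblock simply pays zero research rewards.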

In addition, the research payments are constructed by dividing the available superblock budget among the magnitudes, so if a Rosetta error were to occur, the payment error would be bounded by that superblock's budget and could not result in a chain-level overpayment (i.e. we never pay more than the total budget per day); we divide the rewards by each researcher's share of weight per day.
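
That payment rule can be illustrated with a short sketch (the function name is hypothetical; the real wallet computes this in C++):

```python
# Each researcher's reward is their magnitude's share of a fixed daily
# superblock budget, so a bad Rosetta number can skew relative shares but
# can never push total payouts past the budget.
def superblock_payments(magnitudes, daily_budget):
    """magnitudes: dict CPID -> magnitude; returns dict CPID -> BBP reward."""
    total = sum(magnitudes.values())
    if total == 0:
        return {cpid: 0.0 for cpid in magnitudes}
    return {cpid: daily_budget * mag / total
            for cpid, mag in magnitudes.items()}
```

By construction the payments always sum to the daily budget (or to zero when no one has magnitude), which is the overpayment bound the paragraph describes.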

Unlimited Scalability:

The daily superblock reward system was chosen so that BiblePay may scale to pay up to 32767 researchers concurrently per day. Since there are currently 48,000 Rosetta servers actively crunching, this system could accommodate every Rosetta CPID as a potential BiblePay user! We also do not require researchers to quit their team; through our advanced security, they may come onboard as loyal BiblePay users. This results in great PR for BiblePay.


In summary, this is what I am proposing:

Proof-Of-Work rewards drop to approx 600 BBP per block (a reduction of 90%)
Proof-Of-Distributed-Computing rewards increase to approx 5500 BBP per block (an increase of 90%)
One superblock per day airdrops the researcher rewards, allowing up to 32767 research payments per day


Theory to bust the botnet:

Since POW rewards would decrease by 90%, the botnet would quit heat mining and start cancer mining
POW would require a signed CPID to solve a block
POW would require a CPID to be active and have prior payments to solve a block
BiblePay would have a safety feature: if no CPID is available to solve a block within 31 minutes, we would allow any heat miner to solve it (so the chain never freezes, e.g. in case Rosetta is down)
These rules would lower the chances of an outside 51% attack to almost zero, when combined with DGW's protection
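
The fallback rule in the list above might look roughly like this. This is an illustrative sketch with an assumed timestamp-based check; the real rule would live in the block-validation code.

```python
# Sketch: a block must normally be solved by an active, signed CPID, but if
# no CPID-signed block has appeared for 31 minutes, any heat miner may solve
# the next block so the chain never freezes (e.g. if Rosetta is down).
FALLBACK_SECONDS = 31 * 60  # 31-minute window from the proposal

def block_is_acceptable(solver_has_active_cpid, now, last_block_time):
    """now / last_block_time are Unix timestamps in seconds."""
    if solver_has_active_cpid:
        return True  # the normal, CPID-gated path
    # Escape hatch: open mining only after the chain has stalled.
    return (now - last_block_time) >= FALLBACK_SECONDS
```

Note the trade-off T-Mike raises later in the thread: during the fallback window the CPID gate is open, so the escape hatch itself becomes the surface to reason about.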

I recommend that POL (Proof-of-Loyalty) be disabled for now, and that we instead research ways to include an element of POL in each Rosetta work unit, raising the trust and integrity of Proof-of-Distributed-Computing to the highest level for the masses.
^ If POL can be trusted as a sole consensus algorithm, then PODC+POL may be a potential global leading candidate in the future.

PODC will also allow us to achieve a great future milestone: mining for the unbanked. Since Rosetta cancer mining supports ARM tablets and ARM cell phones, it becomes possible for the unbanked in third-world countries to earn enough through cancer research to live on, or at least to buy groceries. This is more than we offer currently, since PoBH runs on PCs only.


The financial remuneration for this proposal is only 10 BBP, as our budget is mostly consumed.

Please vote for this if you believe it will not only allow us to be a humanitarian force (one facet of our community is doing what Jesus would do: help heal others), but also help remove the influence of the botnet and take our community back.



Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 10, 2018, 12:43:26 PM
I oppose this proposal; I have given numerous reasons in the testnet forum and on the BitcoinTalk forum.

In summary, my biggest concerns are as follows:

Less CPU protecting the network:

There is a finite amount of CPU resource currently working to protect the Bible Pay block chain by mining Bible Pay; the new algorithm for Rosetta@Home will not protect the block chain but would still get 90% of the current PoW rewards.
It follows that in a rational economy 90% of the current CPU resources will go towards Rosetta@Home; therefore 90% of the CPU resources protecting the block chain will no longer be protecting it.
It follows that the mining capacity a 51% attack must overcome will become 10 times smaller, making the amount of CPU power required for someone to launch such an attack 10 times less.

Effectively, this makes it 10 times as easy (and therefore 10 times as inexpensive) to launch a 51% attack against Bible Pay.

Centralisation:

Rosetta@Home is an organisation for furthering scientific research; their point system is designed as a novel way to gamify the donation of computing resources for scientific research. These points were never designed to hold any value, and the people managing them have no protocols in place against blackmail, corruption or fraud.
Nor has the Rosetta@Home organisation accepted any responsibility for holding this position of trust.


Considering these facts, I propose we look into different solutions for the problems currently being faced.
We cannot risk the network, and by extension the continuous donations to orphans, given these facts.

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 10, 2018, 02:03:25 PM
I oppose this proposal; I have given numerous reasons in the testnet forum and on the BitcoinTalk forum.

In summary, my biggest concerns are as follows:

Less CPU protecting the network:

There is a finite amount of CPU resource currently working to protect the Bible Pay block chain by mining Bible Pay; the new algorithm for Rosetta@Home will not protect the block chain but would still get 90% of the current PoW rewards.
It follows that in a rational economy 90% of the current CPU resources will go towards Rosetta@Home; therefore 90% of the CPU resources protecting the block chain will no longer be protecting it.
It follows that the mining capacity a 51% attack must overcome will become 10 times smaller, making the amount of CPU power required for someone to launch such an attack 10 times less.

Effectively, this makes it 10 times as easy (and therefore 10 times as inexpensive) to launch a 51% attack against Bible Pay.

Centralisation:

Rosetta@Home is an organisation for furthering scientific research; their point system is designed as a novel way to gamify the donation of computing resources for scientific research. These points were never designed to hold any value, and the people managing them have no protocols in place against blackmail, corruption or fraud.
Nor has the Rosetta@Home organisation accepted any responsibility for holding this position of trust.


Considering these facts, I propose we look into different solutions for the problems currently being faced.
We cannot risk the network, and by extension the continuous donations to orphans, given these facts.


This comment must be marked as FUD since it is not true.

In this proposal, only CPIDs can mine, in contrast to everyone. Therefore the hash rate ratio is not 10x less; it is limited to one signed BOINC-instance CPID per block, making it far fairer than 12,000 random hashers who can hit us at any time and pump-and-dump at will.

To protect us against pump-and-dump, no single CPID can solve another block within 5 blocks.

This idea is even more secure than POL, since there is no risk of the coins being bought on the open market.

It is also more secure than the Bitcoin status quo, which allows supermajority ASIC groups to form. (The equivalent in Bitcoin would be to require miners to register on the network, provide credentials per block, and deny a duplicate block to the same mining organization.) That would be an improvement over what they have now (an unfair supermajority and most full nodes offline).

FUD.  UNTRUE.

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: T-Mike on February 10, 2018, 02:52:37 PM
I oppose this proposal; I have given numerous reasons in the testnet forum and on the BitcoinTalk forum.

In summary, my biggest concerns are as follows:

Less CPU protecting the network:

There is a finite amount of CPU resource currently working to protect the Bible Pay block chain by mining Bible Pay; the new algorithm for Rosetta@Home will not protect the block chain but would still get 90% of the current PoW rewards.
It follows that in a rational economy 90% of the current CPU resources will go towards Rosetta@Home; therefore 90% of the CPU resources protecting the block chain will no longer be protecting it.
It follows that the mining capacity a 51% attack must overcome will become 10 times smaller, making the amount of CPU power required for someone to launch such an attack 10 times less.

Effectively, this makes it 10 times as easy (and therefore 10 times as inexpensive) to launch a 51% attack against Bible Pay.

Centralisation:

Rosetta@Home is an organisation for furthering scientific research; their point system is designed as a novel way to gamify the donation of computing resources for scientific research. These points were never designed to hold any value, and the people managing them have no protocols in place against blackmail, corruption or fraud.
Nor has the Rosetta@Home organisation accepted any responsibility for holding this position of trust.


Considering these facts, I propose we look into different solutions for the problems currently being faced.
We cannot risk the network, and by extension the continuous donations to orphans, given these facts.

Why do you think Rob's method would not work? It seems like the amount of computing power wouldn't matter once the safeguards are in place. I am new to this, so I am learning from you guys as I go along.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: T-Mike on February 10, 2018, 03:14:38 PM
Rob, it seems like an attack could take place when Rosetta is down since you would let any miner mine on the blockchain. Is there a way to fix that?
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 11, 2018, 07:56:18 AM
Rob, it seems like an attack could take place when Rosetta is down since you would let any miner mine on the blockchain. Is there a way to fix that?

I thought about the rule and realized we have to make a distinction between Rosetta going down (and Rosetta rewards stopping) versus the ramifications for heat mining. Let me coin the term "DR" (Disaster Recovery mode, when Rosetta is down), which by the way will probably only happen for one day every two years or so, when they bring their network down for upgrades. Anyway, let us assume we are in DR.

In DR mode (that is, 24 hours after Rosetta stops accepting work units), our SanctuaryQuorum will not come to a consensus. The sanctuaries will all have a filehash of 0x0 and will not vote for a consensus. This means that when the superblock hits, it will reward 0 research payments. Heat mining will continue, however.

In DR mode, our CPID signature rule will still be in effect for existing CPIDs. The security is still there, because CPID DCCs are still signed in the chain. So really nothing changes (except researchers are not getting paid daily research payments; everyone is just mining for 600 BBP heat rewards). This is because the wallet still knows the existing magnitudes, prior payments, and signed CPIDs, so they can keep heat mining (the rule is written to go back to the *last* DC superblock that was actually paid, hence it always accesses the last researcher set for the heat-mining rules).

Even if we were in DR mode for 6 months, and assuming the wallet lost the records of all signed CPIDs, we would then revert to 30-minute blocks (the blocks would lag because the wallet keeps trying to enforce the CPID rules), but there would not truly be a "security emergency"; instead it would be as it is now: random researchers hashing with DGW as our difficulty algorithm. It would be slightly less secure, but by then we would be issuing a mandatory upgrade to fix whatever broke down in PODC (maybe the entire BOINC network upgraded the protocol, etc.).
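
A hedged sketch of the DR behavior described in this post (the structure and names are assumptions for illustration only, not wallet code):

```python
# Sketch: when sanctuaries all report a 0x0 filehash (Rosetta down for 24h+),
# the daily superblock pays zero research rewards, while the CPID rules keep
# running against the last superblock that was actually paid.
ZERO_HASH = "0x0"

def dr_superblock(quorum_filehash, last_paid_researcher_set):
    """Returns the superblock outcome for the DR (Rosetta-down) case."""
    if quorum_filehash == ZERO_HASH:
        # No consensus: no research payments, but heat mining continues
        # under the CPID rules of the last *paid* researcher set.
        return {"research_payments": {},
                "cpid_rules_from": last_paid_researcher_set}
    raise NotImplementedError("normal superblock path not sketched here")
```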

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: T-Mike on February 11, 2018, 10:45:17 AM
I thought about the rule and realized we have to make a distinction between Rosetta going down (and Rosetta rewards stopping) versus the ramifications for heat mining. Let me coin the term "DR" (Disaster Recovery mode, when Rosetta is down), which by the way will probably only happen for one day every two years or so, when they bring their network down for upgrades. Anyway, let us assume we are in DR.

In DR mode (that is, 24 hours after Rosetta stops accepting work units), our SanctuaryQuorum will not come to a consensus. The sanctuaries will all have a filehash of 0x0 and will not vote for a consensus. This means that when the superblock hits, it will reward 0 research payments. Heat mining will continue, however.

In DR mode, our CPID signature rule will still be in effect for existing CPIDs. The security is still there, because CPID DCCs are still signed in the chain. So really nothing changes (except researchers are not getting paid daily research payments; everyone is just mining for 600 BBP heat rewards). This is because the wallet still knows the existing magnitudes, prior payments, and signed CPIDs, so they can keep heat mining (the rule is written to go back to the *last* DC superblock that was actually paid, hence it always accesses the last researcher set for the heat-mining rules).

Even if we were in DR mode for 6 months, and assuming the wallet lost the records of all signed CPIDs, we would then revert to 30-minute blocks (the blocks would lag because the wallet keeps trying to enforce the CPID rules), but there would not truly be a "security emergency"; instead it would be as it is now: random researchers hashing with DGW as our difficulty algorithm. It would be slightly less secure, but by then we would be issuing a mandatory upgrade to fix whatever broke down in PODC (maybe the entire BOINC network upgraded the protocol, etc.).

Ok, I understand mostly now; there might be a corner condition somewhere that we might not have thought of, but we will keep pondering. Another question is this: what prevents someone with write access to the Rosetta database from faking the information required by BiblePay to calculate the magnitudes, and thereby fooling the safeguards?
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 11, 2018, 10:52:21 AM
Ok, I understand mostly now; there might be a corner condition somewhere that we might not have thought of, but we will keep pondering. Another question is this: what prevents someone with write access to the Rosetta database from faking the information required by BiblePay to calculate the magnitudes, and thereby fooling the safeguards?

The people at Rosetta don't calculate magnitudes. They only approve or deny individual work units.

It's up to BOINC to figure out how many cobblestones went into the work unit, using the host info, the duration, and the video coprocessors; they even have "checks" where they check in on the work unit every X seconds and log info about each timeslice, so as to log the exact cobblestone rate.

You have to fool Rosetta AND BOINC. And BOINC is not so easy to fool, as it's distributed.

In addition, BOINC does not give us the magnitude; it gives us total credit, from which we take a delta. We subtract yesterday from today to find the delta. RAC is a decay function, and we base magnitude on RAC. One can reverse-engineer RAC from total credit, so those numbers can't just be manipulated (i.e. if someone jacks up total credit without RAC following, the RAC is wrong).
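
As an illustration of that cross-check, here is a toy model of the credit delta and RAC. This is an assumed exponential-decay model with a roughly one-week half-life, not BOINC's exact credit formula; the function names are hypothetical.

```python
# Toy model: daily credit earned is the delta of total credit, and RAC is a
# decayed average of past daily credit. Inflating total credit without the
# matching RAC trajectory makes the two numbers disagree, which is what
# makes the manipulation detectable.
HALF_LIFE_DAYS = 7.0  # BOINC's RAC half-life is about one week

def credit_delta(total_today, total_yesterday):
    """Credit earned since yesterday's snapshot."""
    return total_today - total_yesterday

def next_rac(rac, earned, days=1.0):
    """Advance RAC by `days`, folding in `earned` credit over that period."""
    d = 0.5 ** (days / HALF_LIFE_DAYS)  # decay factor over `days`
    return rac * d + earned * (1.0 - d)
```

With no new credit, RAC halves every seven days; a suddenly inflated total credit with a flat RAC history is therefore inconsistent with any honest crunching pattern.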

The real thrust of your question, though, is: what if the admin of Rosetta goes in and validates all of one researcher's work units without the work being performed? The only way to detect that is to write pool reports comparing our reference machine to each researcher. That is the only idea I have for that; otherwise, we have to trust Rosetta's SQL database per work-unit result.



Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: T-Mike on February 11, 2018, 11:24:10 AM
The people at Rosetta don't calculate magnitudes. They only approve or deny individual work units.

It's up to BOINC to figure out how many cobblestones went into the work unit, using the host info, the duration, and the video coprocessors; they even have "checks" where they check in on the work unit every X seconds and log info about each timeslice, so as to log the exact cobblestone rate.

You have to fool Rosetta AND BOINC. And BOINC is not so easy to fool, as it's distributed.

In addition, BOINC does not give us the magnitude; it gives us total credit, from which we take a delta. We subtract yesterday from today to find the delta. RAC is a decay function, and we base magnitude on RAC. One can reverse-engineer RAC from total credit, so those numbers can't just be manipulated (i.e. if someone jacks up total credit without RAC following, the RAC is wrong).

The real thrust of your question, though, is: what if the admin of Rosetta goes in and validates all of one researcher's work units without the work being performed? The only way to detect that is to write pool reports comparing our reference machine to each researcher. That is the only idea I have for that; otherwise, we have to trust Rosetta's SQL database per work-unit result.

Thanks for answering my questions, I'll continue asking in the testnet forum.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: jaapgvk on February 13, 2018, 10:29:06 AM
I've read everything that Swongel and Rob wrote about the subject. I respect Swongel's knowledge of the subject and admire the fact that he has a solid opinion.

That being said, I find it extremely difficult to process every little detail that has been given related to PODC and the ramifications it could possibly have for the project. My biggest concern is the botnet. Of course, decentralization is paramount because we are a cryptocurrency, but most of all I want this project to succeed in the charitable vision it set out with. And for that, in my opinion, we need to get rid of the botnet.

When I first found out about this project, I recognized in Rob someone with a great vision, and although I'd like to have more knowledge before making a decision, I'm going with my gut feeling on this and choose to stand by Rob and whatever his vision is. He made this coin, and if his decision for PODC ultimately leads to the demise of BiblePay, then that's just how it shall be. So I vote yes :)
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 16, 2018, 02:45:32 AM
Since you won't accept my arguments regarding centralisation, or even about lowering the amount of hashes needed, here are a few more:

There are already ASICs doing protein simulations/folding:
https://en.wikipedia.org/wiki/Anton_(computer)

BOINC themselves talk about "reducing the likelihood of results and credit falsification", signifying that this is a problem that cannot be solved but is merely counteracted in a patchy way:
https://boinc.berkeley.edu/trac/wiki/SecurityIssues

GridCoin isn't considered a good implementation of crypto by many hackers (not the evil kind):
https://news.ycombinator.com/item?id=8962896

Additionally, if a 51% attack is successfully executed (which will become much easier, even if you don't consider that to be true), one can prevent new CPIDs from joining the network simply by not allowing any CPID-announcing transactions into the network; there would be no incentive for miners to mine CPID-announcing transactions other than for the good of the network (which isn't reliable enough for crypto).

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: jaapgvk on February 17, 2018, 07:57:07 AM
With the addition of Swongel's latest arguments, and the questions on bitcointalk from investors, maybe it's best to go about this implementation slow and steady, since it's such a big step in development. I really hope more people will give their opinion on this implementation.

I'm not a programmer, nor am I an expert on blockchain technology, and I'm sorry that I can't give more input on that side, but in the end I do want this community to flourish.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 08:19:15 AM
With the addition of Swongel's latest arguments, and the questions on bitcointalk from investors, maybe it's best to go about this implementation slow and steady, since it's such a big step in development. I really hope more people will give their opinion on this implementation.

I'm not a programmer, nor am I an expert on blockchain technology, and I'm sorry that I can't give more input on that side, but in the end I do want this community to flourish.

There are no questions on bitcointalk from investors.  Burito is not an investor and that would be "singular".

I think what we have in the PODC testing room is better than what we have in prod, since the status quo has let us down: we are sharing 93% of our emission with a botnet. I'd rather start by sharing it with 2,000 BOINC-network cancer researchers and make a major effort to ensure the security stays.

As I said once, it would be better to be hacked once a day inside Rosetta than to continue as we are.

I will say this: the latest post from Swongel is 70% accurate this time, so I will not delete it. Instead I will ask him if he's willing to help us make PODC the de facto standard, a highly secure consensus algorithm for blockchains; this way BiblePay could address the remaining 1% concern that he posts.

I am disappointed at the last part of the post, however, about his biased view of 51% attacks. If we are going to talk about this, we need to be neutral and not spread FUD. He knows that every coin is subject to 51% attack risk, yet disregards the fact that a limited subset of miners with DGW in front of it is more secure than the unfair supermajority existing in Bitcoin today. It's ridiculous to make those assertions and expect to be taken seriously.


Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 08:22:53 AM
Since you won't accept my arguments regarding centralisation, or even about lowering the amount of hashes needed, here are a few more:

There are already ASICs doing protein simulations/folding:
https://en.wikipedia.org/wiki/Anton_(computer)

BOINC themselves talk about "reducing the likelihood of results and credit falsification", signifying that this is a problem that cannot be solved but is merely counteracted in a patchy way:
https://boinc.berkeley.edu/trac/wiki/SecurityIssues

GridCoin isn't considered a good implementation of crypto by many hackers (not the evil kind):
https://news.ycombinator.com/item?id=8962896

Additionally, if a 51% attack is successfully executed (which will become much easier, even if you don't consider that to be true), one can prevent new CPIDs from joining the network simply by not allowing any CPID-announcing transactions into the network; there would be no incentive for miners to mine CPID-announcing transactions other than for the good of the network (which isn't reliable enough for crypto).

First, regarding the ASIC:
That's irrelevant. ASICs are designed to perform specific tasks; it does not mean Rosetta could run on an ASIC even in 20 years. They have 150 paid scientists, and the code is too complicated to even port to GPU. Low risk, not relevant.

I'll address the others as we go.

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 09:11:07 AM
There are no questions on bitcointalk from investors.  Burito is not an investor and that would be "singular".

I think what we have in the PODC testing room is better than what we have in prod, since the status quo has let us down: we are sharing 93% of our emission with a botnet. I'd rather start by sharing it with 2,000 BOINC-network cancer researchers and make a major effort to ensure the security stays.

As I said once, it would be better to be hacked once a day inside Rosetta than to continue as we are.

I will say this: the latest post from Swongel is 70% accurate this time, so I will not delete it. Instead I will ask him if he's willing to help us make PODC the de facto standard, a highly secure consensus algorithm for blockchains; this way BiblePay could address the remaining 1% concern that he posts.

I am disappointed at the last part of the post, however, about his biased view of 51% attacks. If we are going to talk about this, we need to be neutral and not spread FUD. He knows that every coin is subject to 51% attack risk, yet disregards the fact that a limited subset of miners with DGW in front of it is more secure than the unfair supermajority existing in Bitcoin today. It's ridiculous to make those assertions and expect to be taken seriously.

Yes, 51% attacks exist in any coin; only, you propose to make the required CPU power just 5.1% by asking 90% of the cycles to go to workloads not directly related to the blockchain. Don't patronise me: I know very well what I am talking about. You might disagree with me, but that doesn't make you right. I will help you with PODC by telling you: don't implement PODC in this way. I have told you this often, with valid reasons.
 
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 09:18:30 AM
Yes, 51% attacks exist in any coin; only, you propose to make the required CPU power just 5.1% by asking 90% of the cycles to go to workloads not directly related to the blockchain. Don't patronise me: I know very well what I am talking about. You might disagree with me, but that doesn't make you right. I will help you with PODC by telling you: don't implement PODC in this way. I have told you this often, with valid reasons.

No sir, you don't know what you are talking about, because you are confusing reward levels with decreased security. Security has a 1:1 relationship to how much hash power is supplied against the front-line network at a given time, and you continue to disregard DGW. Do you have any experience with any of the top 50 cryptos, such as having multiple developers working for you?

Don't mislead our investors, and don't snap back and speak to me that way. You've been warned.

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 09:34:11 AM
No sir, you don't know what you are talking about, because you are confusing reward levels with decreased security. Security has a 1:1 relationship to how much hash power is supplied against the front-line network at a given time, and you continue to disregard DGW. Do you have any experience with any of the top 50 cryptos, such as having multiple developers working for you?

Don't mislead our investors, and don't snap back and speak to me that way. You've been warned.

"Do you have any experience with any of the top 50 cryptos, such as having multiple developers working for you?"
No, neither do you.

Stay classy Rob.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 09:46:00 AM
Quote from: Swongel on February 18, 2018, 09:34:11 AM


I do.



Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 09:53:46 AM

Quote from: Rob Andrews on February 18, 2018, 09:46:00 AM

Well, please enlighten me: with which top 50 crypto project have you been working, what was your contribution, and which of their developers did you work with?
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 09:59:54 AM
Quote from: Swongel on February 18, 2018, 09:53:46 AM

No. I don't lie, and this is not about me. Stick to the subject, and admit you were wrong about the 51% vector.
We have 12,000 miners now mining 202 blocks per day.
In the future, we promote CPID mining.
We will have 1,000 miners (people who have access to the CPID signature) mining on controller wallets.
A reduction of 90% of the hashpower means the POW difficulty will drop to exactly its average.

All additional hashpower requires a SIGNED CPID, with magnitude. Meaning there is no random hashpower, which means we have a decrease in volatility to the coin, and hence its risk.

Therefore your analysis of a 51% attack above is incorrect.
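The claim that difficulty tracks hash power can be illustrated with a toy retarget: if blocks arrive slower because hash power left, the next adjustment scales difficulty down proportionally. This is a deliberate simplification, not BiblePay's actual DGW code, and the ~420-second spacing is an assumption derived from the 202-blocks-per-day figure above.

```python
def retarget(difficulty, actual_spacing, target_spacing=420):
    """Toy retarget: scale difficulty by how fast blocks arrived
    relative to the target spacing (simplified; not the real DGW)."""
    return difficulty * target_spacing / actual_spacing

# If 90% of hash power leaves, blocks arrive ~10x slower at the old
# difficulty, so the next retarget cuts difficulty by ~90%.
old_diff = 5000.0
new_diff = retarget(old_diff, actual_spacing=4200)
print(new_diff)  # 500.0
```

DGW performs this kind of adjustment over a short rolling window rather than one fixed interval, which is why the difficulty settles quickly at the level the remaining hash power can sustain.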


Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 10:26:56 AM
Quote from: Rob Andrews on February 18, 2018, 09:59:54 AM

Yes, and when 90% is off mining Rosetta@Home, by simply sharing a single CPID with magnitude the botnet will mine at full capacity, funneling their hash power through a single CPID and giving them a huge advantage.

Yes, the difficulty will drop indeed, but that's my whole argument: lower difficulty = easier for the botnet to mine, even with this "DGW", which will do nothing but make the botnet use a single CPID. They could also just use CPIDs of other accounts in their blocks; the blocks would still be valid. They wouldn't get a reward, but they'd still only need 5.1% (relative to current hashrates) to launch a double spend.

I won't concede my argument, because it's a good argument. It is not an argument from authority, it is not a fallacy; it is math.
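The "5.1%" figure follows from simple arithmetic: if 90% of today's hash power migrates to Rosetta, a majority of what remains is 51% of 10%. A quick check (illustrative only):

```python
# If 90% of current hash power leaves for Rosetta@Home, the PoW
# network retains 10% of it; outpacing a majority of that remainder
# requires 51% * 10% of today's total hash rate.
remaining_fraction = 1.0 - 0.90           # hash power left on PoW after the shift
attack_threshold = 0.51 * remaining_fraction
print(f"{attack_threshold:.3f}")           # 0.051, i.e. 5.1% of current hash rate
```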
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 10:34:51 AM
Quote from: Swongel on February 18, 2018, 10:26:56 AM

Ok, the conversation has improved slightly, thanks.

This vector is not possible, because the botnet would funnel all power to one CPID; then, after they solve block #1, they will be unable to solve blocks 2, 3, 4, or 5 (because we have a rule in now that enforces distinct CPIDs per set of 5). There will be a one in 1,000 chance for each distinct researcher to jump in and solve block #2, partially because we limit each individual miner to 250 hashes per second (in prod now). Yes, I agree that the botnet could then switch to CPID #2, but that would raise the difficulty (as it is now, to 2500), choking themselves, and as I mention, our low-nonce rule is global: it's not per machine, so you or anyone else globally cannot solve a block at a rate of more than 250 HPS. Meaning that each and every block gives the other 1,000 participants a very high chance of solving that block, a higher chance than in your run-of-the-mill crypto.

Low Nonce + DGW + More private hashing = Lower volatility

It is a true statement that if a botnet were to attempt to share CPIDs, they would no longer be able to carry a 93% domination level, because we require distinct CPIDs per set of 5 (CPIDs with magnitude).

Bottom line: you should realize the setup here. This ecosystem is safer than bitcoin with regard to 51% attacks.
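The "distinct CPIDs per set of 5" rule can be sketched as a sliding-window check over recent block solvers. The function and names below are hypothetical; the real consensus check lives in the C++ core wallet and involves signature verification as well.

```python
def cpid_allowed(new_cpid, recent_cpids, window=5):
    """Sketch of the distinct-CPID rule: reject a block whose signed
    CPID already solved one of the previous (window - 1) blocks."""
    return new_cpid not in recent_cpids[-(window - 1):]

# cpid-A solved a block 4 blocks ago, so it may not solve the next one;
# a fresh CPID may.
chain = ["cpid-A", "cpid-B", "cpid-C", "cpid-D"]
print(cpid_allowed("cpid-A", chain))  # False
print(cpid_allowed("cpid-E", chain))  # True
```

Under this rule a botnet funneling everything through one CPID can win at most one block in five, regardless of how much raw hash power it controls.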

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 10:55:39 AM
Quote from: Rob Andrews on February 18, 2018, 10:34:51 AM

So they'll need 5 CPIDs, which doesn't really solve the problem, just makes it a little harder. Even then, they could just announce other people's CPIDs.
Also, you cannot limit hashes/s; only valid blocks get announced, therefore hashrates are not public knowledge, and changing that 250 to 250,000 is trivially easy.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 18, 2018, 03:25:38 PM
Quote from: Swongel on February 18, 2018, 10:55:39 AM

I'm flabbergasted that I'm still talking to you. Your credibility is really close to zero.

First, they can't announce their CPIDs, because you can't sign a CPID unless you own it. So that's incorrect. They can just share 5 across 10,000 machines and watch for the last used CPID. It gets them nowhere. Why would it? It's just the legal way to heat mine across 5 CPIDs, so what? It's not going to last, as the reward is too low. It's not a 51% attack vector! You don't even know what that means.

Next, if we have the 12,000 machines from the botnet sharing the 5 CPIDs, what we have done is limit those 12,000 machines to 1/5th their size in hashing power (because only one botnet machine can solve a block every 5 blocks). Which will in itself not be sustainable for long periods, because the reward is too low; but even if it were, all that would do is lower our botnet average difficulty to 1000 from 5000. What are they achieving? Nothing. But I just proved it *increased* security vs. the bitcoin model (with your own example).

You said I can't limit hashes? Incorrect. We do limit hashes in prod. A block hash can only change with a timestamp or a nonce change. It's more work to create a new block than to update its hash (so much more work that it's not worth recreating a block for every hash change). So now you are limited to 250 nonces per second, or a timestamp change. We have a timestamp limiter in the code, meaning that is not a vector. So yes, we do limit the hashes per second.
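The nonce and timestamp limits described above can be sketched as two simple header checks: cap the nonce for young blocks, and bound the timestamp around network-adjusted time. This is a hypothetical simplification of the CheckNonce/timestamp logic mentioned in the thread, not the actual BiblePay code.

```python
MAX_NONCE = 250        # nonce ceiling for young blocks, per the description above
TIMESTAMP_WINDOW = 300 # seconds allowed either side of network-adjusted time

def check_block_header(nonce, block_time, prev_block_time, adjusted_time):
    """Sketch of the described limits; not the actual consensus code."""
    age = max(block_time - prev_block_time, 1)
    if age < 61 and nonce > MAX_NONCE:
        return False  # nonce too high for a block under 61 seconds old
    if abs(block_time - adjusted_time) > TIMESTAMP_WINDOW:
        return False  # timestamp outside the allowed drift window
    return True

print(check_block_header(nonce=100_000, block_time=30, prev_block_time=0, adjusted_time=30))  # False
print(check_block_header(nonce=200, block_time=30, prev_block_time=0, adjusted_time=30))      # True
```

Because every node enforces the same check on received blocks, an attacker editing their own miner's "250" constant gains nothing: the network rejects the oversized nonce regardless.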


Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 18, 2018, 04:05:45 PM
Quote from: Rob Andrews on February 18, 2018, 03:25:38 PM

Because the person mining the block can decide which transactions are in it, he can just fork off the main branch and undo transactions... the whole point of a 51% attack. And that's even if he couldn't get his hands on 5 CPIDs, which is trivially easy.

So the goal of the attacker isn't to get those coins; it's to race the main branch so they can undo transactions and thus double spend their coins. I know very well what a 51% attack is, thank you very much.

Furthermore, shuffling around transactions every 250 hashes isn't a limitation; it's merely a simple shuffle. Attackers could easily create just 100 transactions each block to shuffle around a bit, thus changing the resulting hash (even without being able to change the nonce).

I don't know why I'm even arguing with you either; it's not like you're going to listen. Maybe you should ask a security expert about this stuff. I would be dumbfounded if there's any security expert willing to go on record saying that this is even remotely safe for a cryptocurrency.
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 19, 2018, 07:24:23 AM
Quote from: Swongel on February 18, 2018, 04:05:45 PM

Unfortunately you haven't proven anything new here, and most of this is incorrect.

A 51% attack would require more than 51% of the CPIDs; that was the statement I wanted you to make, and you didn't. It's literally impossible.

They can't fork off to another branch because of setbestchain; no client will follow that branch.

You discount the hardness of maintaining a CPID with RAC. It's not easy; try it. It takes a full node one day of work to get enough RAC to be in the block.

Shuffling 250 transactions? I'm sorry, wrong terminology; not possible, and not related to what is in our code. We deliberately left only two fields in the getblockhash that can change the hash: nonce and timestamp. The timestamp has an allowable window of 5 minutes in either direction from getadjustedtime, meaning we only allow 300 values (in seconds). The nonce is limited (CheckNonce()) to 250 HPS globally. You cannot submit a block to our network with a nonce > 251 for a block that is less than 61 seconds old. Transactions have nothing to do with it. This is indeed a security feature, and it affects the conversation by 90%. This alone, plus DGW, prevents a 51% attack. We basically allow 99% of the hashes to pass through biblepay with no effect when they exceed the nonce limit. To discount this effect to zero would be like ignoring the cornerstone of the building; it's absolutely relevant as part of the base calculation for a 51% attack.

Regarding the security expert: I am still working with Martin, but any real security expert is invited to take a look at the POW code. I'm confident we are much more secure than the average POW implementation with the CPID restriction and the nonce limit in place, but this person will have to admit the weaknesses inherent in POW in the first place and make an honest comparison, not one that is biased toward removing the credibility of the other facets of PODC. I admitted where the risks were in PODC, and I'm basically telling everyone we know the SQL database inside Rosetta is the weak point. Let's find a way to create a tamperproof pill bottle for it, and make it more reliable. I continue to say I would rather live with Rosetta's quirks than the current bitcoin status quo, where 93% of the heat miners are now hogging the rewards and we have no control over upgrades. That's a disaster, whereas PODC's environment is quite favorable.

Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Swongel on February 19, 2018, 10:14:39 AM
Quote from: Rob Andrews on February 19, 2018, 07:24:23 AM

If you're not hashing transactions, then how can you even verify the transactions within an announced block were actually mined and not just announced by someone using the same hash? If you don't hash the transactions, all your blockchain is getting consensus about is the timestamp and nonce... except that in the current source the hashMerkleRoot also gets thrown into the block, and it can be changed by shuffling/adding/removing transactions.

51% of CPIDs isn't required, unless magnitude constitutes discounted hashrate, in which case you'll have PoL, and you'll get the same problems as with PoL: just build up magnitude and spend it all at once to get a row of blocks.

Furthermore, the "best chain" is the longest chain; the whole point of a 51% attack is to become the best chain by outpacing the other chain.
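The point about hashMerkleRoot is easy to demonstrate: reordering the transactions in a block changes the merkle root, and therefore the header hash, without touching the nonce or timestamp. Below is a minimal merkle root over transaction hashes (simplified to single SHA-256; Bitcoin-family coins use double SHA-256).

```python
import hashlib

def merkle_root(tx_hashes):
    """Minimal merkle root: pair up hashes level by level, duplicating
    the last entry on odd-sized levels, until one root remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Four dummy transaction hashes; reversing their order changes the root.
txs = [hashlib.sha256(bytes([i])).digest() for i in range(4)]
print(merkle_root(txs) != merkle_root(list(reversed(txs))))  # True
```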
Title: Re: Proposal: Proof-Of-Distributed Computing as major reward algorithm
Post by: Rob Andrews on February 19, 2018, 10:27:41 AM
Quote from: Swongel on February 19, 2018, 10:14:39 AM

Swongel,

You can't reshuffle the block without performing 100x the work of the biblehash. The biblehash is intensive, but a PC can generate 1,000 biblehashes per second; it cannot regenerate 1,000 blocks per second, because of all the BIP checks for the memory-pool transactions. So this is incorrect.

We are hashing all transactions, of course. I'm saying we globally limit the nonce level per second, in CheckBlock. So you can't get around it, and it does control how much hashing hits the front-line activeChain best block.

Yes, 51% of the CPIDs are required, because we require a distinct signed CPID per block. Meaning that the equivalent of a 51% hashpower attack requires control of 51% of the signed CPIDs. That's what a 51% attack is: controlling the ability, with the most likelihood, to own the chain, and that would require 51% of the CPIDs. Take a look at the requirements of a POW 51% attack before assuming that I am incorrect.

The reason a CPID with N magnitude does not need to equate to Y hashes per second for the CPID's corresponding heat miners is that one solved block with massive hashpower at that instant does not move the attacker closer to controlling our chain; it moves them 1/CPID(Count) closer. Which means my statement above, requiring you to control 51% of the CPIDs, remains true. Therefore the statement about PoL misses the point; PoL is not necessary for this topic. I am strictly addressing the inaccuracy of your original post: that this PODC implementation lowers security. My response is: no, it raises security.

Higher magnitude does not give you a higher chance of solving blocks. We require "a CPID with magnitude" in each block, to solve the block at the current POW level (there are now two difficulties in the coin: PODC diff and POW diff). So it is an incorrect statement that you can save up magnitude. Your magnitude rewards come from the superblock daily; your heat rewards are what you receive when you solve a POW block.

Incorrect on the best chain. The best chain is the chain with the most chainwork. See our chainwork algorithm. The one in your attack with the alternate transactions has less chainwork, and is therefore not the best chain. That's why we stuck with POW for this implementation: to allow the core to follow the best chain easily and avoid the fork problems that *could be possible nuisances* with alternatives such as POS.
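The "most chainwork" rule is the standard Bitcoin-family one: each block contributes roughly 2^256 / (target + 1) units of work, and nodes follow the chain with the greatest cumulative work, not the most blocks. A sketch (function names are illustrative, modeled on Bitcoin Core's GetBlockProof):

```python
def block_work(target):
    """Work one block contributes: ~2^256 / (target + 1)."""
    return (1 << 256) // (target + 1)

def best_chain(chains):
    """Pick the chain (list of per-block targets) with the most
    cumulative work, not merely the most blocks."""
    return max(chains, key=lambda targets: sum(block_work(t) for t in targets))

hard = [1 << 200] * 3   # 3 blocks at a hard (low) target: ~2^56 work each
easy = [1 << 220] * 10  # 10 blocks at a much easier target: ~2^36 work each
print(best_chain([hard, easy]) is hard)  # True: fewer blocks, more work
```

This is why a longer chain built at low difficulty cannot displace a shorter chain built at high difficulty: its cumulative work is smaller.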