What Are Invertible Bloom Lookup Tables? - Dash News

The Origins of the Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasnt been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal.
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100 million fold, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is at least a two-fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest its been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, repeating Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, be honest about them, and present them as a trade-off we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large banks' settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify, is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness lies in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardforks need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment, I'm all ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gavin and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look like then" it's "no bitcoin long term it's broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed some times. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are the supposedly disastrous negative parts of this plan that make an alternative plan (ie: increasing the block size) so necessary and obvious? I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanations are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just a price estimator for a market that doesn't even really exist) would be a disaster.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense with git architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to Bitcoin [link] [comments]

GMaxwell in 2006, during his Wikipedia vandalism episode: "I feel great because I can still do what I want, and I don't have to worry what rude jerks think about me ... I can continue to do whatever I think is right without the burden of explaining myself to a shreaking [sic] mass of people."

https://en.wikipedia.org/w/index.php?title=User_talk:Gmaxwell&diff=prev&oldid=36330829
Is anyone starting to notice a pattern here?
Now we're starting to see that it's all been part of a long-term pattern of behavior over the last 10 years with Gregory Maxwell, who has deep-seated tendencies towards divisiveness and control.
After examining his long record of harmful behavior on open-source software projects, it seems fair to summarize his strengths and weaknesses as follows:
(1) He does have excellent programming skills.
(2) He needs to be in control.
(3) He always believes that whatever he's doing is "right" - even if a consensus of other highly qualified people happens to disagree with him (people whom he rudely dismisses as "shrieking masses", etc.)
(4) Because of (1), (2), and (3), we are now seeing how dangerous it can be to let him assume power over an open-source software project.
This whole mess could have been avoided.
This whole mess only happened because people let Gregory Maxwell "be in charge" of Bitcoin development as CTO of Blockstream.
The whole reason the Bitcoin community is divided right now is simply because Gregory Maxwell is dead-set against any increase in "max blocksize" even to a measly 2 MB (he actually threatened to leave the project if it went over 1 MB).
This whole problem would go away if he could simply be man enough to step up and say to the Bitcoin community:
"I would like to offer my apologies for having been so stubborn and divisive and trying to always be in control. Although it is still my honest personal belief that that a 1 MB 'max blocksize' would be the best for Bitcoin, many others in the community evidently disagree with me strongly on this, as they have been vehement and unrelenting in their opposition to me for over a year now. I now see that any imagined damage to the network resulting from allowing big blocks would be nothing in comparison to the very real damage to the community resulting from forcing small blocks. Therefore I have decided that I will no longer attempt to force my views onto the community, and I shall no longer oppose a 'max blocksize' increase at this time."
Good luck waiting for that kind of an announcement from GMax! We have about as much chance of GMax voluntarily stepping down as leader of Bitcoin as of Putin voluntarily stepping down as leader of Russia. It's just not in their nature.
As we now know - from his 10-year history of divisiveness and vandalism, and from his past year of stonewalling - he would never compromise like this; compromise is simply not part of his vocabulary.
So he continues to try to impose his wishes on the community, even in the face of ample evidence that the blocksize could easily be not only 2 MB but even 3-4 MB right now - ie, both the infrastructure and the community have been empirically surveyed and it was found that the people and the bandwidth would both easily support 3-4 MB already.
But instead, Greg would rather use his position as "Blockstream CTO" to overrule everyone who supports bigger blocks, telling us that it's impossible.
And remember, this is the same guy who a few years ago was also telling us that Bitcoin itself was "mathematically impossible".
So here's a great plan to get rich:
(1) Find a programmer who's divisive and a control freak and who overrides consensus and who didn't believe that Bitcoin was possible and doesn't believe that it can do simple "max blocksize"-based scaling (even in the face of massive evidence to the contrary).
(2) Invest $21+55 million in a private company and make him the CTO (and make Adam Back the CEO - another guy who also didn't believe that Bitcoin would work).
(3) ???
(4) Profit!
Greg and his supporters say bigblocks "might" harm Bitcoin someday - but they ignore the fact that smallblocks are already harming Bitcoin now.
Everyone from Core / Blockstream mindlessly repeats Greg's mantra that "allowing 2 MB blocks could harm the network" - somehow, someday (but actually, probably not: see Footnotes [1], [2], [3], and [4] below).
Meanwhile, the people who foolishly put their trust in Greg are ignoring the fact that "constraining to 1 MB blocks is harming the community" - right now (ie, people's investments and businesses are already starting to suffer).
This is the sad situation we're in.
And everybody could end up paying the price - which could reach millions or billions of dollars if people don't wake up soon and get rid of Greg Maxwell's toxic influence on this project.
At some point, no matter how great Gregory Maxwell's coding skills may be, the "money guys" behind Blockstream (Austin Hill et al.), and their newer partners such as the international accounting consultancy PwC - and also the people who currently hold $5-6 billion in Bitcoin wealth - and the miners - might want to consider the fact that Gregory Maxwell is so divisive and out-of-touch with the community, that by letting him continue to play CTO of Bitcoin, they may be in danger of killing the whole project - and flushing their investments and businesses down the toilet.
Imagine how things could have been right now without GMax.
Just imagine how things would be right now if Gregory Maxwell hadn't wormed his way into getting control of Bitcoin:
There is a place for everyone.
Talented, principled programmers like Greg Maxwell do have their place on software development projects.
Things would have been fine if we had just let him work on some complicated mathematical stuff like Confidential Transactions (Adam Back's "homomorphic encryption") - because he's great for that sort of thing.
(I know Greg keeps taking this as a "back-handed (ie, insincere) compliment" from me, nullc - but I do mean it with all sincerity: I think he has great programming and cryptography skills, and I think his work on Confidential Transactions could be a milestone for Bitcoin's privacy and fungibility. But first Bitcoin has to actually survive as a going project, and it might not survive if he continues to insist on trying to impose his will in areas where he's obviously less qualified, such as this whole "max blocksize" thing where the infrastructure and the market should be in charge, not a coder.)
But Gregory Maxwell is too divisive and too much of a control freak (and too out-of-touch about what the technology and the market are actually ready for) to be "in charge" of this software development project as a CTO.
So this is your CTO, Bitcoin. Deal with it.
He dismissed everyone on Wikipedia back then as "shrieking masses" and he dismisses /btc as a "cesspool" now.
This guy is never gonna change. He was like this 10 years ago, and he's still like this now.
He's one of those arrogant C/C++ programmers, who thinks that because he understands C/C++, he's smarter than everyone else.
It doesn't matter if you also know how to code (in C/C++ or some other language).
It doesn't matter if you understand markets and economics.
It doesn't matter if you run a profitable company.
It doesn't even matter if you're Satoshi Nakamoto:
Satoshi Nakamoto, October 04, 2010, 07:48:40 PM "It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit / It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete."
https://np.reddit.com/btc/comments/3wo9pb/satoshi_nakamoto_october_04_2010_074840_pm_it_can/
Gregory Maxwell is in charge of Bitcoin now - and he doesn't give a flying fuck what anyone else thinks.
He always has, and always will, simply "do whatever he thinks is right without the burden of explaining himself to you" - even if he has to destroy the community and the project in the process.
That's just the kind of person he is - 10 years ago on Wikipedia (when he was just one of many editors), and now (where he's managed to become CTO of a company which took over Satoshi's repository and paid off most of its devs).
We now have to make a choice.
Footnotes:
[1]
If Bitcoin usage and blocksize increase, then mining would simply migrate from 4 conglomerates in China (and Luke-Jr's slow internet =) to the top cities worldwide with Gigabit broadband - and price and volume would go way up. So how would this be "bad" for Bitcoin as a whole??
https://np.reddit.com/btc/comments/3tadml/if_bitcoin_usage_and_blocksize_increase_then/
[2]
"What if every bank and accounting firm needed to start running a Bitcoin node?" – bdarmstrong
https://np.reddit.com/btc/comments/3zaony/what_if_every_bank_and_accounting_firm_needed_to/
[3]
It may well be that small blocks are what is centralizing mining in China. Bigger blocks would have a strongly decentralizing effect by taming the relative influence China's power-cost edge has over other countries' connectivity edge. – ForkiusMaximus
https://np.reddit.com/btc/comments/3ybl8i/it_may_well_be_that_small_blocks_are_what_is/
[4]
Blockchain Neutrality: "No-one should give a shit if the NSA, big businesses or the Chinese govt is running a node where most backyard nodes can no longer keep up. As long as the NSA and China DON'T TRUST EACH OTHER, then their nodes are just as good as nodes run in a basement" - ferretinjapan
https://np.reddit.com/btc/comments/3uwebe/blockchain_neutrality_noone_should_give_a_shit_if/
submitted by ydtm to btc [link] [comments]

Technical discussion of Gavin's O(1) block propagation proposal

I think there isn't wide appreciation of how important Gavin's proposal is for the scalability of Bitcoin. It's the real deal, and will get us out of the sort of beta mode we've been in, with only a few transactions per second globally. I spent a few hours reviewing the papers referenced at the bottom of his excellent write-up and think I get it now.
If you already get it, then hang around and answer questions from me and others. If you don't get it yet, start by very carefully reading https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The big idea is twofold: fix the miner's incentives to align better with users wanting transactions to clear, and eliminate the sending of redundant data in the newblock message when a block is solved to save bandwidth.
I'll use (arbitrarily) a goal of 1 million tx per block, which at one block every ten minutes works out to just over 1,600 TPS. This seems pretty achievable, without a lot of uncertainty. Really! Read on.
Today, a miner really wants to propagate a solved block as soon as possible so as not to jeopardize their 25 BTC reward. It's not the cpu cost for handling the transactions on the miner's side that's the problem, it's the sending of a larger newblock message around the network that just might cause her block to lose the propagation race against another solution to the block.
So aside from transactions with fees of more than 0.0008 BTC that can make up for this penalty (https://gist.github.com/gavinandresen/5044482), or simply the goodwill of benevolent pools to process transactions, there is today an incentive for miners not to include transactions in a block. The problem is BTC price has grown so high so fast that 0.0008 BTC is about 50 cents, which is high for day-to-day transactions (and very high for third world transactions).
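For the curious, here is a rough reconstruction of where a figure like 0.0008 BTC comes from (my arithmetic, not Gavin's gist; the ~80 ms/kB propagation cost is the ballpark from Decker and Wattenhofer's measurement study, and the other constants are assumptions):

```python
# Back-of-envelope orphan-risk fee threshold (all constants are assumptions).
BLOCK_REWARD = 25.0      # BTC, at the time of writing
BLOCK_INTERVAL = 600.0   # average seconds between blocks
PROP_COST = 0.080        # extra propagation seconds per extra kB (Decker/Wattenhofer-ish)
TX_SIZE_KB = 0.250       # average transaction size

extra_delay = TX_SIZE_KB * PROP_COST          # ~0.02 s of added delay per tx
orphan_risk = extra_delay / BLOCK_INTERVAL    # chance a rival block wins during that delay
breakeven_fee = orphan_risk * BLOCK_REWARD    # expected reward lost by including the tx
print(f"{breakeven_fee:.6f} BTC")             # ~0.000833 BTC, i.e. the 0.0008 figure above
```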
The whole idea centers around an old observation that since the network nodes (including miners) have already received transactions by the normal second-by-second operation of the p2p network, the newblock announcement message shouldn't have to repeat the transaction details. Instead, it can just tell people, hey, I approve these particular transactions called XYZ, and you can check me by taking your copy of those same transactions that you already have and running the hash to check that my header is correctly solved. Proof of work.
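To make that check concrete, here is a minimal sketch (my illustration, not from Gavin's write-up) of the receiver's side: rebuild the merkle root from the txids you already hold and compare it against the solved header. Bitcoin's tree hashes pairs with double-SHA256 and duplicates the last entry at odd levels; byte-order details are glossed over here.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Bitcoin-style merkle root: hash pairs with double-SHA256, duplicating
    the last entry whenever a level has an odd number of nodes."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def check_solved_block(header_merkle_root: bytes, claimed_txids: list) -> bool:
    # The receiver already holds these txs from normal p2p relay, so nothing
    # needs to be re-sent beyond enough information to identify the set.
    return merkle_root(claimed_txids) == header_merkle_root
```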
A basic way to do this would be to send around a Bloom filter in the newblock message. A receiving node would check all the transactions they have, see which of them are in this solved block, and mark them out of their temporary memory pool. Using a BF calculator you can see that you need about 3.6MB (roughly 29 million bits) to get an error rate of 10^-6 for 1 million entries, which is enough to almost always be able to tell if a tx that you know about is in the block or not.
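For reference, the standard sizing arithmetic behind that estimate, as a quick sketch (these are the textbook Bloom filter formulas, not anything from Gavin's gist):

```python
import math

def bloom_size(n: int, p: float):
    """Optimal Bloom filter size (m bits) and hash count (k) for n entries
    at false-positive rate p: m = -n*ln(p)/ln(2)^2, k = (m/n)*ln(2)."""
    m = -n * math.log(p) / math.log(2) ** 2
    k = (m / n) * math.log(2)
    return math.ceil(m), round(k)

m, k = bloom_size(1_000_000, 1e-6)
print(f"{m} bits = {m / 8 / 1e6:.1f} MB, k = {k} hash functions")
# -> 28755176 bits = 3.6 MB, k = 20 hash functions
```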
There are two problems with this: there may be transactions in the solved block that you don't have, for whatever p2p network or policy reason. The BF can't tell you what those are. It can just tell you there were e.g. 1,000,000 tx in this solved block and you were able to find only 999,999 of them. The other glitch is that of those 999,999 it told you were there, a couple could be false positives. I think there are ways you could try to deal with this--send more types of request messages around the network to fill in your holes--but I'll dismiss this and flip back to Gavin's IBLT instead.
The IBLT works super well to mash a huge number of transactions together into one fixed-size (O(1)) data structure, to compare against another set of transactions that is really close, with just a few differences. The "few differences" part compared to the size of the IBLT is critical to this whole thing working. With too many differences, the decode just fails and the receiver wouldn't be able to understand this solved block.
Gavin suggests key size of 8B and data of 8B chunks. I don't understand his data size--there's a big key checksum you need in order to do full add and subtract of IBLTs (let's say 8B, although this might have to be 16B?) that I would rather amortize over more granular data chunks. The average tx is 250B anyway. So I'm going to discuss an 8B key and 64B data chunks. With a count field, this then gives 8 key + 64 data + 16 checksum + 4 count = 92B. Let's round to 100B per IBLT cell.
Let's say we want to fix our newblock message size to around 1MB, in order to not be too alarming for the change to this scheme from our existing 1MB block limit (that miners don't often fill anyway). This means we can have an IBLT with m=10K, or 10,000 cells, which with the 1.5d rule (see the papers) means we can tolerate about 6000 differences in cells, which because we are slicing transactions into multiple cells (4 on average), means we can handle about 1500 differences in transactions at the receiver vs the solver and have faith that we can decode the newblock message fully almost all the time (has to be some way to handle the occasional node that fails this and has to catch up).
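Since the add/subtract/decode mechanics are the crux here, a toy sketch may help. This is my illustration of the data structure from the IBLT papers, not Gavin's code: it keeps the numbers discussed above (m = 10,000 cells, 4 cells per key) but uses bare 8-byte keys, leaving out the 64-byte data chunks and the open question about checksum width.

```python
import hashlib

K = 4  # cells per key, matching the ~4 slices per average tx discussed above

def _h(data: bytes, salt: int) -> int:
    return int.from_bytes(hashlib.sha256(bytes([salt]) + data).digest()[:8], "big")

class IBLT:
    """Toy IBLT over 8-byte keys (think: truncated txids). Values omitted."""

    def __init__(self, m: int):
        assert m % K == 0
        self.m, self.sub = m, m // K   # K disjoint subtables avoid self-collisions
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def _cells(self, key: int):
        kb = key.to_bytes(8, "big")
        return [i * self.sub + _h(kb, i) % self.sub for i in range(K)]

    @staticmethod
    def _chk(key: int) -> int:
        return _h(key.to_bytes(8, "big"), 255)   # per-key checksum catches mixed cells

    def _update(self, key: int, sign: int):
        for c in self._cells(key):
            self.count[c] += sign
            self.key_sum[c] ^= key
            self.chk_sum[c] ^= self._chk(key)

    def insert(self, key: int):
        self._update(key, +1)

    def subtract(self, other: "IBLT") -> "IBLT":
        """Cell-wise difference; stays O(1) size no matter how many keys went in."""
        d = IBLT(self.m)
        d.count = [a - b for a, b in zip(self.count, other.count)]
        d.key_sum = [a ^ b for a, b in zip(self.key_sum, other.key_sum)]
        d.chk_sum = [a ^ b for a, b in zip(self.chk_sum, other.chk_sum)]
        return d

    def decode(self):
        """Peel 'pure' cells until nothing is left. Returns ok=False if the
        sets differed by too much for the table size (the ~1500-tx bound)."""
        ours, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] in (1, -1) and self.chk_sum[c] == self._chk(self.key_sum[c]):
                    key, sign = self.key_sum[c], self.count[c]
                    (ours if sign == 1 else theirs).add(key)
                    self._update(key, -sign)   # remove it from all K of its cells
                    progress = True
        ok = not any(self.count) and not any(self.key_sum)
        return ours, theirs, ok

# Solver and receiver build tables over nearly identical tx sets; only the
# handful of differences must be peeled out, regardless of total set size.
solver, receiver = IBLT(10_000), IBLT(10_000)
shared = range(100_000, 101_000)            # txs both sides already know
for k in shared: solver.insert(k); receiver.insert(k)
for k in (1, 2, 3): solver.insert(k)        # in the block, unknown to receiver
for k in (7, 8): receiver.insert(k)         # in receiver's mempool, not the block
ours, theirs, ok = solver.subtract(receiver).decode()
print(ok, sorted(ours), sorted(theirs))     # True [1, 2, 3] [7, 8]
```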
So now the problem becomes, how can we define some conventions so that the different nodes can mostly agree on which of the transactions flying around the network for the past N (~10) minutes should be included in the solved block. If the solver gets it wrong, her block doesn't get accepted by the rest of the network. Strong incentive! If the receiver gets it wrong (although she can try multiple times with different sets), she can't track the rest of the network's progress.
This is the genius part of this proposal. If we define the convention so that the set of transactions to be included in a block is essentially all of them, then the miners are strongly incentivized, not just by tx fees but by the block reward itself, to include all those transactions that happened since the last block. It still allows them to make their own decisions: up to 1500 tx could be added where convention would say not to, or left out where convention says to include them. This preserves the notion of tx-approval freedom in the network for miners, and some later miner will probably pick up those straggler tx.
I think it might be important to provide as many guidelines for the solver as possible to describe what is in her block, in terms as specific as possible without actually having to give tx ids, so that the receivers, in their attempt to decode this block, can build up as similar an IBLT on their side using the same rules. Something like the tx fee range, some framing of what tx are in the early part and what tx are near the end (time range, I mean). Side note: I guess if you allow a tx fee range in this set of parameters, then the solver could put it real high and send an empty block after all, which works against the incentive I mentioned above, so maybe that particular specification is not beneficial.
From http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf for example, the propagation delay is about 30-40 seconds before almost all nodes have received any particular transaction, so it may be useful for the solver to include tx only up to a certain point in time, like 30 seconds ago. Any tx that is younger than this just waits until the next block, so it's not a big penalty. But some policy like this (and some way to communicate it in the absence of centralized time management among the nodes) will be important to keep the number of differences in the two sets small, below 1500 in my example. The receiver of the newblock message would know when trying to decode it, that they should build up an IBLT on their side also with tx only from up to 30 seconds ago.
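A sketch of what such a convention could look like in code (the 30-second constant and the first-seen bookkeeping are illustrative assumptions, not a spec):

```python
import time

PROPAGATION_LAG = 30  # seconds; roughly the relay delay from the paper cited above

def block_candidate_set(mempool, now=None):
    """mempool: dict mapping txid -> timestamp when this node first saw the tx.
    Solver and receiver apply the same cutoff, so their candidate sets (and
    therefore their IBLTs) differ only by stragglers and policy differences;
    younger txs simply wait for the next block."""
    cutoff = (now if now is not None else time.time()) - PROPAGATION_LAG
    return {txid for txid, first_seen in mempool.items() if first_seen <= cutoff}
```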
I don't understand Gavin's requirement for canonical ordering. I see that it doesn't hurt, but I don't see the requirement for it. Can somebody elaborate? It seems that's his way to achieve the same framing that I am talking about in the previous paragraph, to obtain a minimum number of differences in the two sets. There is no need to clip the total number of tx in a block that I see, since you can keep shoving into the IBLT as much as you want, as long as the number of differences is bounded. So I don't see a canonical ordering being required for clipping the tx set. The XOR (or add-subtract) behavior of the IBLT doesn't require any ordering in the sets that I see, it's totally commutative. Maybe it's his way of allowing miners some control over what tx they approve, how many tx into this canonical order they want to get. But that would also allow them to send around solved empty blocks.
What is pretty neat about this from a consumer perspective is the tx fees could be driven real low, like down to the network propagation minimum which I think as of this spring per Mike Hearn is now 0.00001 BTC or 10 "bits" (1000 satoshis), half a US cent. Maybe that's a problem--the miners get the shaft without being able to bid on which transactions they approve. If they try to not approve too many tx their block won't be decoded by the rest of the network like all the non-mining nodes running the bitpay/coinbases of the world.
Edit: 10 bits is 1000 satoshis, not 10k satoshis
submitted by sandball to Bitcoin [link] [comments]

Flux: Revisiting Near Blocks for Proof-of-Work Blockchains

Cryptology ePrint Archive: Report 2018/415
Date: 2018-05-29
Author(s): Alexei Zamyatin∗, Nicholas Stifter, Philipp Schindler, Edgar Weippl, William J. Knottenbelt∗

Link to Paper


Abstract
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks.
In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages on near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed without conflicting modifications to the underlying base protocol as a velvet fork. We perform an initial analysis of selfish mining which suggests Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.

submitted by dj-gutz to myrXiv [link] [comments]

The Origins of the (Modern) Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasnt been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate against the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100-million-fold reduction, without modifying the max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
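To make the start-state/end-state example concrete, here is a toy sketch (hypothetical names and balances, chosen for illustration; nothing here is from Bishop's post): two parties shuffle value back and forth off-chain, and only the opening and closing states would ever need on-chain transactions.

```python
def settle(balance_a, balance_b, payments):
    """payments: signed amounts; a positive amount means A pays B."""
    for amount in payments:
        # Off-chain bookkeeping only; nothing in this loop touches the chain.
        assert balance_a - amount >= 0 and balance_b + amount >= 0
        balance_a -= amount
        balance_b += amount
    # Only this final state (plus the opening state) needs on-chain txs.
    return balance_a, balance_b

# A million back-and-forth payments net out to two numbers; the same holds
# for 100 million, without a single extra byte on the chain.
print(settle(50, 50, [1, -1] * 500_000))
```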
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are a non-trivial number of people who hold extremes on any of these general belief patterns; even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is at least a twofold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest it's been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
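The parenthetical above alludes to older proposals where miners may exceed the median block size, up to a hard cap, only by meeting a stiffer difficulty target. A sketch of that flavor of rule is below; the exact penalty curve and names are assumptions for illustration, not a quoted specification.

```python
def required_difficulty(base_difficulty, block_size, median_size, hard_cap):
    """Difficulty a miner must target to publish a block of block_size."""
    if block_size > hard_cap:
        raise ValueError("block exceeds the hard maximum")
    if block_size <= median_size:
        return base_difficulty
    oversize = (block_size - median_size) / median_size
    return base_difficulty * (1.0 + oversize) ** 2  # quadratic penalty

# A block 50% over the median must meet 2.25x the normal difficulty:
print(required_difficulty(1.0, 1_500_000, 1_000_000, 8_000_000))
```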
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, repeating Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size of blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large banks' settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness lies in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
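The "IBLT-based relay" Wuille mentions relies on invertible Bloom lookup tables, the data structure this page's title asks about: peers summarize their transaction sets in a constant-size table and "peel" out only the differences, so a block can be relayed as a small diff instead of retransmitting every transaction. Below is a stripped-down toy; real designs also carry a value field and a key-hash checksum to verify "pure" cells before peeling, both omitted here.

```python
import hashlib

def cells_for(key, m, k=3):
    """k pseudo-random cell indices for an integer key."""
    return [int.from_bytes(hashlib.sha256(f"{i}:{key}".encode()).digest()[:4],
                           "big") % m for i in range(k)]

class IBLT:
    def __init__(self, m=64):
        self.count = [0] * m    # net number of keys hashed into each cell
        self.key_xor = [0] * m  # XOR of those keys

    def insert(self, key):
        for i in cells_for(key, len(self.count)):
            self.count[i] += 1
            self.key_xor[i] ^= key

    def delete(self, key):
        for i in cells_for(key, len(self.count)):
            self.count[i] -= 1
            self.key_xor[i] ^= key

    def decode(self):
        """Repeatedly peel 'pure' cells (count of +/-1) to recover keys."""
        recovered, progress = [], True
        while progress:
            progress = False
            for i, c in enumerate(self.count):
                if c in (1, -1):
                    key = self.key_xor[i]
                    recovered.append(key)
                    (self.delete if c == 1 else self.insert)(key)
                    progress = True
        return recovered

table = IBLT()
for txid in (0xAAA, 0xBBB, 0xCCC):  # stand-ins for transaction ids
    table.insert(txid)
print(sorted(table.decode()))  # recovers the three ids (as ints)
```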
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardforks need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment, I'm all ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gavin and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look then" is "bitcoin is broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation so as to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed sometimes. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are the supposedly disastrous negative parts of this plan that make an alternative plan (i.e. increasing the block size) so necessary and obvious? I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanations are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just a price estimator for a market that doesn't even really exist) would be a disaster.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense given git's distributed architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to sound8bits

Merged Mining: Analysis of Effects and Implications

Date: 2017-08-24
Author(s): Alexei Zamyatin, Edgar Weippl

Link to Paper


Abstract
Merged mining refers to the concept of mining more than one cryptocurrency without necessitating additional proof-of-work effort. Merged mining was introduced in 2011 as a bootstrapping mechanism for new cryptocurrencies and a countermeasure against the fragmentation of mining power across competing systems. Although merged mining has already been adopted by a number of cryptocurrencies, to date little is known about its effects and implications.
In this thesis, we shed light on this topic area by performing a comprehensive analysis of merged mining in practice. As part of this analysis, we present a block attribution scheme for mining pools to assist in the evaluation of mining centralization. Our findings disclose that mining pools in merge-mined cryptocurrencies have operated at the edge of, and even beyond, the security guarantees offered by the underlying Nakamoto consensus for extended periods. We discuss the implications and security considerations for these cryptocurrencies and the mining ecosystem as a whole, and link our findings to the intended effects of merged mining.

submitted by dj-gutz to myrXiv

The Mike Hearn Show: Season Finale (and Bitcoin Classic: Series Premiere)

This post debunks Mike Hearn's conspiracy theories regarding Blockstream in his farewell post and points out issues with the behavior of the Bitcoin Classic hard fork and the sketchy tactics of its advocates.
I used to be torn on how to judge Mike Hearn. On the one hand, he has done some good work with BitcoinJ, Lighthouse, etc. Certainly his choice of bloom filter has had a net negative effect on the privacy of SPV users, but all in all it works as advertised.* On the other hand, he has single-handedly advocated for some of the most alarming behavior changes in the Bitcoin network (e.g. redlists, coinbase reallocation, BIP 101, etc.) to date. Not to mention, his advocacy in the past year has degraded from any semblance of professionalism into an adversarial us-vs-them propaganda train. I do not believe his long history with the Bitcoin community justifies this adversarial attitude.
As a side note, this post should not be taken as unabated support for Bitcoin Core. Certainly the dev team is made of humans and like all humans mistakes can be made (e.g. March 2013 fork). Some have even engaged in arguably unprofessional behavior but I have not yet witnessed any explicitly malicious activity from their camp (q). If evidence to the contrary can be provided, please share it. Thankfully the development of Bitcoin Core happens more or less completely out in the open; anyone can audit and monitor the goings on. I personally check the repo at least once a day to see what work is being done. I believe that the regular committers are genuinely interested in the overall well being of the Bitcoin network and work towards the common goal of maintaining and improving Core and do their best to juggle the competing interests of the community that depends on them. That is not to say that they are The Only Ones; for the time being they have stepped up to the plate to do the heavy lifting. Until that changes in some way they have my support.
The hard line that some of the developers have drawn in regards to the block size has caused a serious rift and this write up is a direct response to oft-repeated accusations made by Mike Hearn and his supporters about members of the core development team. I have no affiliations or connection with Blockstream, however I have met a handful of the core developers, both affiliated and unaffiliated with Blockstream.
Mike opens his farewell address with his pedigree to prove his opinion's worth. He masterfully washes over the mountain of work put into improving Bitcoin Core over the years by the "small blockians" to paint the picture that Blockstream is stonewalling the development of Bitcoin. The folks who signed Greg's scalability road map have done some of the most important, unsung work in Bitcoin. Performance improvements, privacy enhancements, increased reliability, better sync times, mempool management, bandwidth reductions etc... all those things are thanks to the core devs and the research community (e.g. Christian Decker), many of which will lead to a smoother transition to larger blocks (e.g. libsecp256k1).(1) While ignoring previous work and harping on the block size exclusively, Mike accuses those same people who have spent countless hours working on the protocol of trying to turn Bitcoin into something useless because they remain conservative on a highly contentious issue that has tangible effects on network topology.
The nature of this accusation is characteristic of Mike's attitude over the past year which marked a shift in the block size debate from a technical argument to a personal one (in tandem with DDoS and censorship in /Bitcoin and general toxicity from both sides). For example, Mike claimed that sidechains constitute a conflict of interest, as Blockstream employees are "strongly incentivized to ensure [bitcoin] works poorly and never improves" despite thousands of commits to the contrary. Many of these commits are top down rewrites of low level Bitcoin functionality, not chump change by any means. I am not just "counting commits" here. Anyways, Blockstream's current client base consists of Bitcoin exchanges whose future hinges on the widespread adoption of Bitcoin. The more people that use Bitcoin the more demand there will be for sidechains to service the Bitcoin economy. Additionally, one could argue that if there was some sidechain that gained significant popularity (hundreds of thousands of users), larger blocks would be necessary to handle users depositing and withdrawing funds into/from the sidechain. Perhaps if they were miners and core devs at the same time then a conflict of interest on small blocks would be a more substantive accusation (create artificial scarcity to increase tx fees). The rationale behind pricing out the Bitcoin "base" via capacity constraint to increase their business prospects as a sidechain consultancy is contrived and illogical. If you believe otherwise I implore you to share a detailed scenario in your reply so I can see if I am missing something.
Okay, so back to it. Mike made the right move when Core would not change its position: he forked Core and gave the community XT. The choice was there; most miners took a pass. Clearly there was not consensus on Mike's proposed scaling road map or how big blocks should be rolled out. And even though XT was a failure (mainly because of massive untested capacity increases which were opposed by some of the larger pools whose support was required to activate the 75% fork), it has inspired a wave of implementation competition. It should be noted that the censorship and attacks by members of /Bitcoin are completely unacceptable; there is no excuse for such behavior. While theymos is entitled to run his subreddit as he sees fit, if he continues to alienate users there may be a point of mass exodus following some significant event in the community that he tries to censor. As for the DDoS attackers, they should be ashamed of themselves; it is recommended that alt. nodes mask their user agents.
Although Mike has left the building, his alarmist mindset on the block size debate lives on through Bitcoin Classic, an implementation which is using a more subtle approach to inspire adoption, as jtoomim cozies up with miners to get their support while appealing to the masses with a call for an adherence to Satoshi's "original vision for Bitcoin." That said, it is not clear that he is competent enough to lead the charge on the maintenance/improvement of the Bitcoin protocol. That leaves most of the heavy lifting up to Gavin, as Jeff has historically done very little actual work for Core. We are thus in a potentially more precarious situation than we were with XT, as some Chinese miners are apparently "on board" for a hard fork block size increase. Jtoomim has expressed a willingness to accept an exceptionally low (60 or 66%) consensus threshold to activate the hard fork if necessary. Why? Because of the lost "opportunity cost" of the threshold not being reached.(c) With variance my guess is that a lucky 55% could activate that 60% threshold. That's basically two Chinese miners. I don't mean to attack him personally; he is just willing to go down a path that requires the support of only two major Chinese mining pools to activate his hard fork. As a side effect of the latency issues of GFW, a block size increase might increase orphan rate outside of GFW, profiting the Chinese pools. With a 60% threshold there is no way for miners outside of China to block that hard fork.
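As a back-of-the-envelope check of that "lucky 55%" worry, assume activation counts blocks over a 1000-block window against a 60% threshold; both numbers are illustrative assumptions, not any proposal's exact parameters.

```python
from math import comb

p, n, need = 0.55, 1000, 600  # hash share, assumed window, assumed threshold
prob = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))
print(f"chance per window: {prob:.4%}")  # roughly 0.08%
# Small per window, but activation logic watches many overlapping windows,
# so over months a lucky streak for 55% of the hash power is not fanciful.
```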
To compound the popularity of this implementation, the efforts of Mike, Gavin and Jeff have further blinded many within the community to the mountain of effort that core devs have put in. And it seems to be working, as they are beginning to successfully ostracize the core devs beyond the network of "true big block-believers." It appears that Chinese miners are getting tired of the debate (and with it Core) and may shift to another implementation over the issue.(d) Some are going around to mining pools and trying to undermine Core's position in the soft vs. hard fork debate. These private appeals to the miner community are a concern because there is no way to know if bad information is being passed on with the intent to disrupt Core's consensus based approach to development in favor of an alternative implementation controlled (i.e. benevolent dictator) by those appealing directly to miners. If the core team is reading this, you need to get out there and start pushing your agenda so the community has a better understanding of what you all do every day and how important the work is. Get some fancy videos up to show the effects of block size increase and work on reading materials that are easy for non technically minded folk to identify with and get behind.
The soft fork debate really highlights the disingenuity of some of these actors. Generally speaking, soft forks are easier on network participants who do not regularly keep up with the network's software updates or have forked the code for personal use and are unable to upgrade in time, while hard forks require timely software upgrades if the user hopes to maintain consensus after the hard fork. The merits of that argument come with heavy debate. However, more concerning is the fact that hard forks require central planning and arguably increase the power developers have over changes to the protocol.(2) In contrast, the 'signal of readiness' behavior of soft forks allows the network to update without any hardcoded flags and developer oversight. Issues with hard forks are further compounded by activation thresholds, as soft forks generally require 95% consensus while Bitcoin Classic only calls for 60-75% consensus, exposing network users to a greater risk of competing chains after the fork. Mike didn't want to give the Chinese any more power, but now the post-XT fallout has pushed the Chinese miners right into the Bitcoin Classic driver's seat.
While a net split did happen briefly during the BIP66 soft fork, imagine that scenario amplified by miners who do not agree to hard fork changes while controlling 25-40% of the networks hashing power. Two actively mined chains with competing interests, the Doomsday Scenario. With a 5% miner hold out on a soft fork, the fork will constantly reorg and malicious transactions will rarely have more than one or two confirmations.(b) During a soft fork, nodes can protect themselves from double spends by waiting for extra confirmations when the node alerts the user that an ANYONECANSPEND transaction has been seen. Thus, soft forks give Bitcoin users more control over their software (they can choose to treat a soft fork as a soft fork or as a hard fork) which allows for greater flexibility on upgrade plans for those actively maintaining nodes and other network critical software. (2) Advocating for low-threshold hard forks is a step in the wrong direction if we are trying to limit the "central planning" of any particular implementation. However I do not believe that is the main concern of the Bitcoin Classic devs.
To switch gears a bit, Mike is ironically concerned that China "controls" Bitcoin, but wanted to implement a block size increase that would only increase their relative control (via increased orphans). Until the p2p wire protocol is significantly improved (IBLT, etc...), there is very little room (if any at all) to raise the block size without significantly increasing orphan risk. This can be easily determined by looking at jtoomim's testnet network data that passed through the normal p2p network, not the relay network.(3) In the meantime this will only get worse if no one picks up the slack on the relay network that Matt Corallo is no longer maintaining. (4)
Centralization is bad regardless of the block size, but Mike tries to conflate the centralization issues with the Blockstream block size side show for dramatic effect. In retrospect, it would appear that the initial lack of cooperation on a block size increase actually staved off increases in orphan risk. Unfortunately, this centralization metric will likely increase with the cooperation of Chinese miners and Bitcoin Classic if major strides to reduce orphan rates are not made.
Mike also manages to link to a post from the ProHashing guy RE forever-stuck transactions, which has been shown to generally be the result of poorly maintained/improperly implemented wallet software.(6) Ultimately Mike wants fees to be fixed, despite the fact that you can't enforce fixed fees in a system that is not centrally planned. Miners could decide to raise their minimum fees even when blocks are >1mb, especially when blocks become too big to reliably propagate across the network without being orphaned. What is the marginal cost for a tx that increases orphan risk by some %? That is a question being explored with flexcaps. Even with larger blocks, if miners outside the GFW fear orphans they will not create the bigger blocks without a decent incentive; in other words, even with a larger block size you might still end up with variable fees. Regardless, it is generally understood that variable fees are not preferred from a UX standpoint, but developers of Bitcoin software do not have the luxury of enforcing specific fees beyond basic defaults hardcoded to prevent cheap DoS attacks. We must expose the user to just enough information so they can make an informed decision without being overwhelmed. Hard? Yes. Impossible? No.
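One hedged way to frame that marginal-cost question, assuming Poisson block arrivals and a toy propagation model (the function name and figures are mine, not measurements):

```python
import math

def breakeven_fee(block_reward, extra_bytes, secs_per_byte, interval=600.0):
    """Expected revenue a miner risks by adding extra_bytes to a block."""
    tau = extra_bytes * secs_per_byte            # added propagation delay
    p_orphan = 1.0 - math.exp(-tau / interval)   # chance a rival block wins
    return block_reward * p_orphan

# e.g. a 25 BTC reward, a 250-byte tx, 10 microseconds of delay per byte:
print(breakeven_fee(25.0, 250, 1e-5))  # ~1e-4 BTC of orphan-risk cost
```

The point of the sketch is only that a rational miner's minimum fee scales with the block reward and with per-byte propagation delay, which is why fees can stay variable even without a hard cap.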
Shifting gears, Mike states that current development progress via segwit is an empty ploy, despite the fact that segwit not only comes with a marginal capacity increase, but also plugs up major malleability vectors, allows pruning witness data from historical blocks, and brings a bunch of other fun stuff. It's a huge win for unconfirmed transactions (which Mike should love). Even if segwit does require non-negligible changes to wallet software and Bitcoin Core (~500 LoC), it allows us time to improve block relay (IBLT, weak blocks) so we can start raising the block size without fear of increased orphan rate. Certainly we can rush to increase the block size now and further exacerbate the China problem, or we can focus on the "long play" and limit negative externalities.
And does segwit help the Lightning Network? Yes. Is that something that indicates a Blockstream conspiracy? No. Comically, the big blockians used to criticize Blockstream for advocating for LN when there was no one working on it, but now that it is actively being developed, the tune has changed and everything Blockstream does is a conspiracy to push for Bitcoin's future as a dystopic LN powered settlement network. Is LN "the answer"? Obviously not, most don't actually think that. How it actually works in practice is yet to be seen and there could be unforeseen emergent characteristics that make it less useful for the average user than originally thought. But it's a tool that should be developed in unison with other scaling measures if only for its usefulness for instant txs and micropayments.
Regardless, the fundamental divide rests on ideological differences that we all know well. Mike is fine with the miner-only validation model for nodes and is willing to accept some miner centralization so long as he gets the necessary capacity increases to satisfy his personal expectations for the immediate future of Bitcoin. Greg and co. believe that a distributed full node landscape helps maintain a balance of decentralization in the face of the miner centralization threat. For example, if you have 10 miners who are the only sources for blockchain data then you run the risk of undetectable censorship, prolific sybil attacks, and no mechanism for individuals to validate the network without trusting a third party. As an analogy, take the Tor network: you use it with an expectation of privacy while understanding that the multi-hop nature of the routing will increase latency. Certainly you could improve latency by removing a hop or two, but with it you lose some privacy. Does Tor's high latency make it useless? Maybe for watching Netflix, but not for submitting leaked documents to some newspaper. I believe this is the philosophy held by most of the core development team.
Mike does not believe that the Bitcoin network should cater to this philosophy, and sees any activity which stunts the growth of on-chain transactions as a direct attack on the protocol. Ultimately however I believe Greg and co. also want Bitcoin to scale on-chain transactions as much as possible. They believe that in order for Bitcoin to increase its capacity while adhering to acceptable levels of decentralization, much work needs to be done. It's not a matter of if block size will be increased, but when. Mike has mistaken this adherence to strong principles of decentralization for disingenuousness and a cover-up for a dystopic future of Bitcoin where sidechains run wild with financial institutions paying $40 per transaction. Again, this does not make any sense to me. If banks are spending millions to co-opt this network, what advantage does a decentralized node landscape have to them?
There are a few roads that the community can take now: one where we delay a block size increase while improvements to the protocol are made (with the understanding that some users may have to wait a few blocks to have their transaction included, fees will be dependent on transaction volume, and transactions <$1 may be temporarily cost ineffective) so that when we do increase the block size, orphan rate and node drop off are insignificant. Another is the immediate large block size increase which possibly leads to a future Bitcoin which looks nothing like it does today: low numbers of validating nodes, heavy trust in centralized network explorers and thus a more vulnerable network to government coercion/general attack. Certainly there are smaller steps for block size increases which might not be as immediately devastating, and perhaps that is the middle ground which needs to be trodden to appease those who are emotionally invested in a bigger block size. Combined with segwit however, max block sizes could reach unacceptable levels. There are other scenarios which might play out with competing chains etc..., but in that future Bitcoin has effectively failed.
As with any technology that requires maintenance and human interaction, Bitcoin will require politicking for decision making. Up until now that has occurred via the "vote by download" for software which implements some change to the protocol. I believe this will continue to be the most robust of the options available to us. Now that there is competition, the Bitcoin Core community can properly advocate for changes to the protocol that it sees fit without being accused of co-opting the development of Bitcoin. An ironic outcome to the situation at hand. If users want their Bitcoins to remain valuable, they must actively determine which developers are most competent and have their best interests at heart. So far the core dev community has years of substantial and successful contributions under its belt, while the alt implementations have a smattering of developers who have not yet publicly proven (besides perhaps Gavin--although his early mistakes with block size estimates are concerning) that they have the skills and endurance necessary to maintain a full node implementation. Perhaps now it is time that we focus on the personalities to whom many want to entrust Bitcoin's future. Let us see if they can improve the speed at which signatures are validated by 7x. Or if they can devise privacy-preserving protocols like Confidential Transactions. Or if they can figure out ways to improve traversal times across a merkle tree. Can they implement HD functionality into a wallet without any coin-crushing bugs? Can they successfully modularize their implementation without breaking everything? If so, let's welcome them with open arms.
But Mike is at R3 now, which seems like a better fit for him ideologically. He can govern the rules with relative impunity and there is not a huge community of open source developers, researchers and enthusiasts to disagree with. I will admit, his posts are very convincing at first blush, but ultimately they are nothing more than a one-sided appeal to those in the community who have unrealistic or incomplete understandings of the technical challenges faced by developers maintaining a consensus-critical, validation-heavy, distributed system that operates within an adversarial environment. Mike always enjoyed attacking Blockstream, but when surveying his past behavior it becomes clear that his motives were not always pure. Why else would you leave with such a nasty, public farewell?
To all the XT'ers, btc'ers and so on, I only ask that you show some compassion when you critique the work of Bitcoin Core devs. We understand you have a competing vision for the scaling of Bitcoin over the next few years. They want Bitcoin to scale too, you just disagree on how and when it should be done. Vilifying and attacking the developers only further divides the community and scares away potential future talent who may want to further the Bitcoin cause. Unless you can replace the folks doing all this hard work on the protocol or can pay someone equally as competent, please think twice before you say something nasty.
As for Mike, I wish you the best at R3 and hope that you can one day return to the Bitcoin community with a more open mind. It must hurt having your software out there being used by so many but your voice snuffed. Hopefully one day you can return when many of the hard problems are solved (e.g. reduced propagation delays, better access to cheap bandwidth) and the road to safe block size increases have been paved.
(*) https://eprint.iacr.org/2014/763.pdf
(q) https://github.com/bitcoinclassic/bitcoinclassic/pull/6
(b) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012026.html
(c) https://github.com/bitcoinclassic/bitcoinclassic/pull/1#issuecomment-170299027
(d) http://toom.im/jameshilliard_classic_PR_1.html
(0) http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/06
(1) https://github.com/bitcoin/bitcoin/graphs/contributors
(2) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html
(3) https://toom.im/blocktime (beware of heavy website)
(4) https://bitcointalk.org/index.php?topic=766190.msg13510513#msg13510513
(5) https://news.ycombinator.com/item?id=10774773
(6) http://rusty.ozlabs.org/?p=573
submitted by citboins to Bitcoin

[Index] Scaling Conference Overview: Day 1

Session topics: Privacy / Fungibility, Scalability, Smart Contracts, Proof of Work
submitted by KarmaNote to Bitcoin

The Big Blocks Mega Thread

Since this is a pressing and prevalent issue, I thought maybe condensing the essential arguments into one mega thread is better than rehashing everything in new threads all the time. I chose an FAQ format so that each common claim can be answered in one place. I don't want to re-post everything here, so where appropriate I'm just going to use links.
Disclaimer: This is biased towards big blocks (BIP 101 in particular) but still tries to mention the risks, worries and fears. I think this is fair because all other major bitcoin discussion places severely censor and discourage big block discussion.
 
What is the block size limit?
The block size limit was introduced by Satoshi on 2010-07-15 as an anti-DoS measure (though this was not stated in the commit message; more info here). It has never been touched since, because historically there was no need and because raising the block size limit requires a hard fork. The block size directly limits the number of transactions in a block. Therefore, the capacity of Bitcoin is directly limited by the block size limit.
 
Why does a raise require a hard fork?
Because larger blocks are seen as invalid by old nodes, a block size increase would fork these nodes off the network. Therefore it is a hard fork. However, it is possible to downsize the block limit with a soft fork, since smaller blocks would still be seen as valid by old nodes. It is considerably easier to roll out a soft fork. Therefore, it makes sense to roll out a more ambitious hard fork limit and downsize as needed with soft forks if problems arise.
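A minimal sketch of that asymmetry, with simplified constants (the raised and lowered limits are hypothetical):

```python
OLD_LIMIT = 1_000_000      # enforced by non-upgraded nodes
RAISED_LIMIT = 8_000_000   # hypothetical hard-fork limit
LOWERED_LIMIT = 500_000    # hypothetical soft-fork limit

def old_node_accepts(size):
    return size <= OLD_LIMIT

def raised_node_accepts(size):
    return size <= RAISED_LIMIT

def lowered_node_accepts(size):
    return size <= LOWERED_LIMIT

# Raising: a 2 MB block is valid to upgraded nodes but rejected by old
# nodes, so the two groups follow different chains (a hard fork).
assert raised_node_accepts(2_000_000) and not old_node_accepts(2_000_000)

# Lowering: anything the tightened rule accepts, old nodes accept too,
# so old nodes follow the new chain without upgrading (a soft fork).
assert all(old_node_accepts(s) for s in (100_000, 400_000, 500_000))
```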
 
What is the deal with soft and hard forks anyways?
See this article by Mike Hearn: https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7#.74502eypb
 
Why do we need to increase the block size?
The Bitcoin network is reaching its imposed block size limit while the hard- and software would be able to support more transactions. Many believe that in its current phase of growth, artificially limiting the block size is stifling adoption, investment and future growth.
Read this article and all linked articles for further reading: http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
Another article by Mike Hearn: https://medium.com/@octskyward/crash-landing-f5cc19908e32#.uhky4y1ua (this article is a little outdated since both Bitcoin Core and XT now have mempool limits)
 
What is the Fidelity Effect?
It is the chicken-and-egg problem applied to the future growth of Bitcoin. If companies do not see how Bitcoin can scale long term, they don't invest, which in turn slows down adoption and development.
See here and here.
 
Does an increase in block size limit mean that blocks immediately get larger to the point of the new block size limit?
No, blocks are as large as there is demand for transactions on the network. But one can assume that if the limit is lifted, more users and businesses will want to use the blockchain. This means that blocks will get bigger, but they will not automatically jump to the size of the block size limit. Increased usage of the blockchain also means increased adoption, investment and also price appreciation.
 
Which are the block size increase proposals?
See here.
It should be noted that BIP 101 is the only proposal which has been implemented and is ready to go.
 
What is the long term vision of BIP 101?
BIP 101 tries to track hardware limitations regarding bandwidth as closely as possible so that nodes can keep running on normal home-user-grade internet connections, keeping the decentralized aspect of Bitcoin alive. Since increasing the block size limit is hard, a long-term schedule is beneficial to planning and investment in the Bitcoin network. Go to this article for further reading and to understand what is meant by "designing for success".
BIP 101 vs actual transaction growth visualized: http://imgur.com/QoTEOO2
Note that the actual growth in BIP 101 is piece-wise linear and does not grow in steps as suggested in the picture.
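For the curious, the cap is easy to compute. The constants below reflect one reading of BIP 101 (8 MB base on 2016-01-11, doubling every two years for twenty years, linear interpolation in between); treat them as assumptions and check the BIP text for the authoritative values.

```python
BASE = 8_000_000         # 8 MB starting cap
T0 = 1_452_470_400       # 2016-01-11 00:00:00 UTC (unix time)
DOUBLING = 63_072_000    # two years, in seconds
MAX_DOUBLINGS = 10       # growth stops at 8 GB after twenty years

def bip101_cap(timestamp):
    """Piece-wise linear max block size at a given unix timestamp."""
    if timestamp < T0:
        return 1_000_000
    d = min((timestamp - T0) / DOUBLING, float(MAX_DOUBLINGS))
    whole, frac = int(d), d - int(d)
    return int((BASE << whole) * (1.0 + frac))  # linear between doublings

print(bip101_cap(T0), bip101_cap(T0 + DOUBLING // 2), bip101_cap(T0 + DOUBLING))
# -> 8000000 12000000 16000000
```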
 
What is up with the moderation and censorship on bitcoin.org, bitcointalk.org and /bitcoin?
Proponents of a more conservative approach believe that a block size increase proposal that does not have "developer/expert consensus" should not be implemented via a majority hard fork. Therefore, discussion about the full node clients which implement BIP 101 is not allowed. Since the same individuals have major influence over all three bitcoin websites (most notably theymos), discussion of Bitcoin XT is censored and/or discouraged on these websites.
 
What is Bitcoin XT anyways?
More info here.
 
What does Bitcoin Core do about the block size? What is the future plan by Bitcoin Core?
Bitcoin Core scaling plan as envisioned by Gregory Maxwell: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
 
Who governs or controls Bitcoin Core anyways? Who governs Bitcoin XT? What is Bitcoin governance?
Bitcoin Core is governed by a consensus mechanism. How it actually works is not clear. It seems that any major developer can "veto" a change. However, there is one head maintainer who pushes releases and otherwise organizes the development effort. It should be noted that the majority of the main contributors to Bitcoin Core are Blockstream employees.
BitcoinXT follows a benevolent dictator model (as Bitcoin used to follow when Satoshi and later Gavin Andresen were the lead maintainers).
It is a widespread belief that Bitcoin development can be separated into protocol development and full node development. This means that there can be multiple implementations of Bitcoin that all follow the same protocol and overall consensus mechanism. More reading here. With multiple implementations, each individual implementation can be run following a benevolent dictator model, while protocol development would follow an overall consensus model (enforced by Bitcoin's fundamental design through full nodes and miners' hash power). It is still unclear how protocol changes should actually be governed in such a model. Bitcoin governance is a research topic and evolving.
 
What are the arguments against a significant block size increase and against BIP 101 in particular?
The main arguments against a significant increase are related to decentralization and therefore robustness against commercial interests and government regulation and intervention. More here (warning: biased Wiki article).
Another main argument is that Bitcoin needs a fee market established by a low block size limit to support miners long term. There is significant evidence and game theory to doubt this claim, as can be seen here.
Finally, block propagation and verification times increase with block size. This in turn increases miners' orphan rates, which means reduced profit. Some believe this disadvantages small miners, who are not as well connected to the big miners. Also, mining is currently heavily centralized in China, and since most of these miners are behind the Great Firewall of China, their bandwidth to the rest of the world is limited. There is a fear that longer block propagation times would favor Chinese miners as long as they hold a mining majority. However, solutions in development can drastically reduce block propagation times, so this problem will be less of an issue long term.
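For intuition on the propagation/orphan trade-off, a back-of-the-envelope model often used in these debates (an assumption for illustration, not something from this post) treats block discovery as a Poisson process with a 600-second mean interval:

```python
import math

# If blocks arrive as a Poisson process with mean interval
# T = 600 s, a block that takes t seconds to reach the rest of
# the hash power is orphaned with probability roughly
#     P(orphan) ~ 1 - exp(-t / T).
# Simplified model for illustration only.
def orphan_probability(propagation_s: float, block_interval_s: float = 600.0) -> float:
    return 1.0 - math.exp(-propagation_s / block_interval_s)

for t in (2, 10, 30):
    print(f"{t:>2} s propagation -> ~{orphan_probability(t):.2%} orphan risk")
```

Under this model, cutting propagation from 30 seconds to 2 seconds reduces orphan risk by more than an order of magnitude, which is why the propagation-time solutions mentioned above matter so much to miners.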
 
What is up with the fee market and what is the Lightning Network (LN)?
Major Bitcoin Core developers believe that a fee market established by a low block size is needed for the future security of the bitcoin network. While many believe this is fundamentally true, there is major dispute over whether a fee market needs to be forced by a low block size limit. One of the main LN developers thinks such a fee market is needed (read here). The Lightning Network is a non-bandwidth scaling solution: it uses payment channels that can be opened and closed with Bitcoin transactions settled on the blockchain. By routing payments through many such channels, it is in theory possible to support far more transactions, while a user needs only a few payment channels and therefore rarely has to settle on the actual blockchain. More info here.
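As a toy illustration of the accounting idea behind payment channels (this is not the actual Lightning protocol, which involves signed commitment transactions and hashed timelock contracts; all names and numbers here are made up):

```python
# Toy payment channel: one on-chain transaction funds it, one
# closes it, and any number of balance updates happen off-chain.
class ToyChannel:
    def __init__(self, alice_sats: int, bob_sats: int):
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.offchain_updates = 0

    def pay(self, frm: str, to: str, amount: int) -> None:
        if self.balances[frm] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.offchain_updates += 1  # signed between the parties; never hits the chain

ch = ToyChannel(alice_sats=100_000, bob_sats=0)
for _ in range(1_000):
    ch.pay("alice", "bob", 10)

# 1,000 payments, but only two on-chain transactions (open + close).
print(ch.balances, ch.offchain_updates)
```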
 
How does LN and other non-bandwidth scaling solutions relate to Bitcoin Core and its long term scaling vision?
Bitcoin Core is headed towards a future where block sizes are kept low so that a fee market is established long term that secures miner incentives. The main scaling solution propagated by Core is LN and other solutions that only sometimes settle transactions on the main Bitcoin blockchain. Essentially, Bitcoin becomes a settlement layer for solutions that are built on top of Bitcoin's core technology. Many believe that long term this might be inevitable. But forcing this off-chain development already today seems counterproductive to Bitcoin's much needed growth and adoption phase before such solutions can thrive. It should also be noted that no major non-bandwidth scaling solution (such as LN) has been tested or even implemented. It is not even clear if such off-chain solutions are needed long term scaling solutions as it might be possible to scale Bitcoin itself to handle all needed transaction volumes. Some believe that the focus on a forced fee market by major Bitcoin Core developers represents a conflict of interest since their employer is interested in pushing off-chain scaling solutions such as LN (more reading here).
 
Are there solutions in development that show the block sizes proposed by BIP 101 are viable, and that block propagation times in particular are low enough?
Yes, most notably: Weak Blocks, Thin Blocks and IBLT (Invertible Bloom Lookup Tables).
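Since Invertible Bloom Lookup Tables are the headline topic here, a minimal sketch of the data structure may help (illustrative only; the cell count, hash construction and encoding differ from any real proposal). Each key is inserted into k cells, two tables can be subtracted cell by cell, and "pure" cells are then peeled out to recover the symmetric difference between two sets, such as two nodes' mempools:

```python
import hashlib

K = 3        # hash functions (cells) per key
CELLS = 32   # table size; should comfortably exceed the expected set difference

def _positions(key: int) -> list:
    """Derive K distinct cell indices for a key."""
    positions, i = [], 0
    while len(positions) < K:
        digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
        p = int.from_bytes(digest[:4], "big") % CELLS
        if p not in positions:
            positions.append(p)
        i += 1
    return positions

def _checksum(key: int) -> int:
    return int.from_bytes(hashlib.sha256(f"chk:{key}".encode()).digest()[:4], "big")

class IBLT:
    def __init__(self):
        self.count = [0] * CELLS
        self.key_xor = [0] * CELLS
        self.chk_xor = [0] * CELLS

    def insert(self, key: int, sign: int = 1) -> None:
        for p in _positions(key):
            self.count[p] += sign
            self.key_xor[p] ^= key
            self.chk_xor[p] ^= _checksum(key)

    def subtract(self, other: "IBLT") -> "IBLT":
        diff = IBLT()
        for i in range(CELLS):
            diff.count[i] = self.count[i] - other.count[i]
            diff.key_xor[i] = self.key_xor[i] ^ other.key_xor[i]
            diff.chk_xor[i] = self.chk_xor[i] ^ other.chk_xor[i]
        return diff

    def peel(self) -> list:
        """Recover (key, sign) pairs; may fail if the table is too full."""
        out, progress = [], True
        while progress:
            progress = False
            for i in range(CELLS):
                pure = abs(self.count[i]) == 1 and \
                       self.chk_xor[i] == _checksum(self.key_xor[i])
                if pure:
                    key, sign = self.key_xor[i], self.count[i]
                    out.append((key, sign))
                    self.insert(key, -sign)  # remove the key from all its cells
                    progress = True
        return out

# Reconcile two "mempools" that differ in three transaction ids:
a, b = IBLT(), IBLT()
for txid in (101, 102, 103, 104):
    a.insert(txid)
for txid in (103, 104, 105):
    b.insert(txid)
print(sorted(a.subtract(b).peel()))  # 101, 102 with sign +1; 105 with sign -1
```

The appeal for block relay is that the table only needs to scale with the difference between two mempools, not with the block size itself.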
 
What is Segregated Witness (SW) and how does it relate to scaling and block size increases?
See here. SW, among other things, is a way to increase effective block capacity once without a hard fork (the nominal block size is not increased, but witness data is exchanged separately from blocks).
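As a hedged sketch of the accounting that makes this possible: the rule below is the block "weight" later specified in BIP 141 (an assumption relative to this post, which predates the final spec). Witness bytes are discounted four-to-one, so total block size can exceed 1 MB while the base block stays within the old limit:

```python
# Block weight per BIP 141: weight = 3 * base + total, and a
# block is valid if weight <= 4,000,000.
MAX_WEIGHT = 4_000_000

def block_weight(base_bytes: int, witness_bytes: int) -> int:
    total = base_bytes + witness_bytes
    return 3 * base_bytes + total

# A legacy-style block with no witness data hits exactly the old 1 MB cap:
print(block_weight(1_000_000, 0))  # 4,000,000

# ~2 MB of total data fits when roughly two-thirds of it is witness bytes:
print(block_weight(660_000, 1_340_000))  # 3,980,000 <= 4,000,000
```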
 
Feedback and more of these question/answer type posts (or revised question/answer pairs) are appreciated!
 
ToDo and thoughts for expansion:
@Mods: Maybe this could be stickied?
submitted by BIP-101 to btc

It is time to usher in a new phase of Bitcoin development - based not on crypto & hashing & networking (that stuff's already done), but on clever refactorings of data structures in pursuit of massive and perhaps unlimited new forms of scaling

Debates among devs are normal and important.
Debates between programmers are the epitome of decentralized development and as such they are arguably the most important mechanism that will ensure the ongoing success of the Bitcoin (or cryptocurrencies) project.
Therefore, we would be wise to encourage such debates, rather than trying to make them go away by calling them "personal attacks".
In the real world, there aren't a whole lot of different ways to hammer a nail into a board or pour cement into a hole - but in the abstract world of mathematics and programming, there are many, many different ways to represent and manipulate a data structure, limited only by our imaginations, so it is actually appropriate to expect and even demand lots of jostling and critiquing from our programmers as they "try to invent a better mousetrap."
In fact, this is the kind of informal jockeying and shop talk that always has gone on and always will go on among mathematicians and programmers - and quite rightly so, because it is precisely the mechanism whereby they maintain order among their ranks, by making subtle and cogent observations about who knows what.
A famous example of this typical sort of jockeying and shop talk can be seen elsewhere in the ongoing debates between programmers of the "procedural" / "object-oriented" school (C/C++, Java) and the "functional" school (Haskell, ML). It's always quite an eye-opener for a procedural programmer who has been writing "loops" all their life to finally discover "folds" and other higher-order functions in functional programming. Both "accomplish" the same thing, of course - but in radically and subtly different ways, since a fold in a functional language is a "first-class citizen" which can be passed around as an argument parameterizing another function, etc. - allowing much more compact and expressive (and sometimes even more efficient) code.
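To make the loop-versus-functional contrast concrete, here is the same computation written both ways, sketched in Python as a stand-in for the C-versus-Haskell comparison; the fold is a first-class value that can be handed to other functions as a parameter:

```python
from functools import reduce

def total_loop(xs):
    # The procedural "loop" version: mutate an accumulator.
    acc = 0
    for x in xs:
        acc += x
    return acc

# The functional version: a fold, itself a first-class value
# that can be stored, passed around, or composed.
total_fold = lambda xs: reduce(lambda acc, x: acc + x, xs, 0)

def apply_to_chunks(summarize, chunks):
    # `summarize` parameterizes this function, as the post describes.
    return [summarize(c) for c in chunks]

print(apply_to_chunks(total_fold, [[1, 2], [3, 4, 5]]))  # [3, 12]
assert total_loop([1, 2, 3]) == total_fold([1, 2, 3]) == 6
```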
Different Bitcoin dev skill sets are required for different stages of Bitcoin's life cycle
An example of the debate between various devs can be seen here:
It is "clear that Greg Maxwell actually has a fairly superficial understanding of large swaths of computer science, information theory, physics and mathematics."- Dr. Peter Rizun (managing editor of the journal Ledger)
https://np.reddit.com/btc/comments/3xok2o/it_is_clear_that_greg_maxwell_unullc_actually_has/
What Peter R is saying here is simply that a different skill set is needed to usefully contribute to Bitcoin development now that it has moved well beyond its "proof-of-concept and initial rollout" stages (hey, this thing actually works) and is now trying to move into its "massive scaling" stages (let's try to roll this thing out to millions or billions of people).
Bitcoin's "proof-of-concept and initial rollout" stages
Initially, during the "proof-of-concept and initial rollout" stages, the skill set required to be a "Bitcoin dev" merely involved knowing enough cryptography, hashing, networking, "game theory", rudimentary economics, and C/C++ programming to understand Satoshi's original vision and implementation. The work consisted of simple and obvious refactorings, cleanups and optimizations that respected the overall design decisions captured in the original C/C++ code, while maintaining the brilliant "game theory" incentives baked therein - most notable of all being that thing which some mathematicians have taken to calling "Nakamoto Consensus" (a useful emerging mathematical-historical term along the lines of Nash Equilibrium). This was Satoshi's brilliant cobbling-together of several existing concepts from crypto, hashing, game theory and rudimentary economics to provide a good-enough solution to the long-standing Byzantine Generals Problem, which mathematicians and programmers had heretofore (for decades) considered unsolvable.
In particular, during the "proof-of-concept and initial rollout" stages, the crypto and hashing stuff is all pretty much done: the elliptic-curve cryptography has been decided upon (and by the way Satoshi very carefully managed to pick one of the few elliptic curves that is NSA-proof) and the various hashing algorithms (SHA, RIPE) are actually quite old from previous work, and the recipe for combining them all together has been battle-tested and it should work fine for the next few decades or so (assuming that practical quantum computing is probably not going to come along on that time scale).
Similarly, during the "proof-of-concept and initial rollout" stages, the networking and incentives and game theory are all pretty much done: the way the mempool gets relayed, the way miners race to solve blocks while trying to minimize orphaning, and the incentives provided currently mainly by the coinbase subsidy and to be provided much later (after more halvings and/or more increases in volume and price) mainly by transaction fees - this stuff has also been decided upon, and is working well enough (within the parameters of our existing imperfect regulatory and economic landscape and networking topology, where things such as ASIC chips, cheap electricity and cooling in China, and the Great Firewall of China have come to the fore as major factors driving decisions about who mines where).
Bitcoin's "massive scaling" stages
Now, as we attempt to enter the "massive scaling" stage, a different skill set is required. As I've outlined above, the crypto and the hashing and the incentives are all pretty much done now - and mining has become concentrated where it's most profitable, and we are actually starting to hit the "capacity ceiling" a few times (up till now just some spam attacks and stress tests - but soon, more worryingly, possibly even within the next few months, really hitting the capacity ceiling with "real" transactions).
Early scaling debates centered around blocksize
And so, for the past year, we've gone through the never-ending debates on scaling - most of them focusing up till now (perhaps rather naïvely, some have argued) on the notion of "maximum blocksize", which was set at 1 MB by Satoshi as a temporary anti-spam kludge.
The smallblock proponents have been claiming that pretty much all "scaling solutions" based on simply increasing the maximum blocksize could have bad effects such as decreasing the number of nodes (decreasing this important type of decentralization) or increasing the number of orphans (decreasing profits for certain miners) - so they have been quite adamant in resisting any such proposals.
Meanwhile the bigblock proponents have been claiming that increased adoption (higher price and volume) should be more than enough to eventually offset / counteract any supposed decrease in node count and miner profits that might happen immediately after bigblocks would be rolled out.
For the most part, both sides appear to be arguing in good faith (with the possible exception of private companies hoping to be able to peddle future, for-profit "solutions" to the "problem" of artificially scarce level-one on-chain block space - eg, Blockstream's Lightning Network) - so the battles have raged on, the community has become divided, and investors are becoming hesitant.
New approaches transcending the blocksize debates
In this mathematical-historical context, it is important to understand the fundamental difference in approach taken by Peter__R. He is neither arguing for smallblocks nor for bigblocks nor for a level-2 solution. He is instead (with his recently released groundbreaking paper on Subchains - not to be confused with sidechains or treechains =) sidestepping and transcending those approaches to focus on an entirely different, heretofore largely unexplored approach to the problem - the novel concept of "nested subchains":
By nesting subchains, weak block confirmation times approaching the theoretical limits imposed by speed-of-light constraints would become possible with future technology improvements.
Now, this is a new paper, and it will still undergo a lot of peer review before we can be sure that it can deliver on what it promises. But at first glance, it is very promising - not least of all because it is attacking the whole problem of "scaling" from a new and possibly highly productive angle: not involving bigblocks or smallblocks or bolt-ons (LN) but instead examining the novel possibility of decomposing the monolithic "blocks" being appended to the "chain" into some sort of "substructures" ("subchains"), in the hopes that this may permit some sort of efficiencies and economies at the network relay level.
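For intuition about the "weak block" primitive that subchains build on, here is a toy sketch (the 16x factor and all numbers are made up for illustration): a weak block is just a solved header that meets an easier target, so many of them are found and shared between each real block, letting peers pre-validate the transactions they carry:

```python
import hashlib

STRONG_TARGET = 2**240      # toy difficulty: ~1 hit per 65,536 hashes
WEAK_MULTIPLE = 16          # weak blocks meet a 16x easier target

def toy_hash(header: bytes, nonce: int) -> int:
    return int.from_bytes(
        hashlib.sha256(header + nonce.to_bytes(8, "big")).digest(), "big")

strong = weak = 0
for nonce in range(500_000):
    h = toy_hash(b"toy-block-header", nonce)
    if h < STRONG_TARGET:
        strong += 1                      # a real block
    elif h < STRONG_TARGET * WEAK_MULTIPLE:
        weak += 1                        # shared early on the subchain

# Expect roughly 15 weak hits per strong hit (the band between
# the two targets is 15x the width of the strong target).
print(strong, weak)
```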
"Substructural refactoring"-based approaches
So what we are seeing here is essentially a different mathematical technique being applied, for the first time, to a different part of the problem in an attempt to provide a "massive scaling" solution for Bitcoin. (I'm not sure what to call this technique - but the name "substructural refactoring" is the first thing that comes to mind.)
While there had indeed been some sporadic discussions among existing devs along the lines of "weak blocks" and "subchains", this paper from Peter R is apparently the first time that anyone has made a comprehensive attempt to tie all the ideas together in a serious presentation including, in particular, detailed analysis of how subchains would dovetail with infrastructure (bandwidth and processing) constraints and miner incentives in order for this to actually work in practice.
Graphs reminiscent of elasticity and equilibrium graphs from economics
For example, if you skim through the PDF you'll see the kinds of graphs you often see in economics papers involving concepts such as elasticity and equilibrium and optimization (eg, a graph where there's a "gap" between two curves which we're hoping will decrease in size, or another graph where there's a descending curve and an ascending curve which intersect at some presumably optimum point).
Now, you can see from the vagueness of some of my arguments and illustrations above that I am by no means an expert in the mathematics and economics involved here, but am instead merely a curious bystander with only a hobbyist's understanding of these complex subjects (although a rather mature one at that, having worked most of my long and chequered career in math and programming and finance).
But I am fairly confident that what we are seeing here is the emergence of a new sort of "skill set" which will be needed from the kind of Bitcoin developers who can lead us to a successful future where millions or billions of people (and perhaps also machines) are able to transact routinely and directly on the blockchain.
And if a developer like Peter R wants to direct some criticism at another developer who has failed to have these insights, I think that is a natural manifestation of human ego and competitiveness, which is healthy because it keeps these guys on their toes.
A new era of Bitcoin development
The time for tweaking the crypto and hashing is long past - which means that the skills of guys like nullc and petertodd may no longer be as important as they were in the past. (In fact, there are entirely different objections that can be raised against Peter Todd, given his proclivity for proving that he can, at the mathematical level, break systems which actually do work "good enough" by relying on constraints imposed at the "social level" - a level which PTodd evidently does not much believe in. For the most egregious example of this, see his decision to force his Opt-In (soon to become On-By-Default) Full RBF - which breaks existing "good-enough" risk mitigation practices many businesses had up till now relied on to profitably use zero-conf for retail.)
Likewise the skills of adam3us may also not be as important as they were in the past: he is, after all, the guy who invented hashcash, so he is clearly a brilliant cryptographer and pioneer cypherpunk who laid the groundwork for what Bitcoin has become today, but it is unclear whether he now has (or ever had) the vision to appreciate how big (and fast) Bitcoin can become (at "level 1" - ie, directly on the blockchain itself).
In this regard, it is important to point out the serious lack of vision and optimism on the part of nullc and petertodd and adam3us.
TL;DR: Times are a-changin'. The old dev skill sets for Bitcoin's early years (crypto, hashing, networking) are becoming less important, while new dev skill sets are becoming more important (such as something one might call "substructural refactoring"). We should encourage competition as new devs emerge who have these new skill sets, because they may be the way out of the "dead end" of the blocksize-based approaches to scaling, opening up massive and perhaps unlimited new forms of "fractal-like" scaling instead.
submitted by ydtm to btc


Bitcoin is a settlement network, clear and simple. The analogy with Bitcoin being a digital form of cash payments sounds nice, but it's false and breaks down quickly: when I give you a dollar bill, I don't need to wait ~10 minutes for you to receive it, nor does the entire world see that transaction.

The current Scaling Bitcoin Workshop will take place September 12-13 at the Centre Mont-Royal (Salon Mont-Royal I, 4th floor), 2200 rue Mansfield, Montréal, Québec, H3A 3R8. We are accepting technical proposals for improving Bitcoin performance, including designs, experimental results, and comparisons against other proposals.

With IBLT and Weak Blocks, the latency bottleneck on Bitcoin's ability to scale is replaced with a bandwidth bottleneck. Back noted, "The network actually has excess bandwidth." The improvements in block latency made possible by these two improvements are accomplished by spreading out bandwidth usage over time.

From the perspective of participating in this network as an independent auditor, I must consider it likely that the block size could grow 1000-fold - or to any size which prohibits my ability to validate the network - in the next year or whatever (had there been no limit). What a curious...

Minisketch is a new solution that's trying to solve an old problem. Spearheaded by Blockstream co-founder Pieter Wuille, Bitcoin Core contributor and fellow Blockstream co-founder Gregory Maxwell, and Blockstream software engineer Gleb Naumenko, the open-source initiative is designed to achieve set reconciliation between the mempools of each full node.
