Bitcoin Mining Explained Like You’re Five: Part 2

A few questions about bitcoin mining

Newbie here.
I'm not yet familiar with bitcoin mining, just a bit interested. The bitcoin block header, as we all know, consists of the nonce, the timestamp, the Merkle root, nBits, and the hash of the previous block. Miners usually increment the nonce by 1 until they either find a solution or exhaust all 2^32 possibilities.
However, I have read that it is very common for miners to exhaust all 2^32 combinations and not find a solution at all. As a result, they have to make slight changes to the timestamp and/or the Merkle root to calculate even more combinations.
Therefore, what is the probability of a miner exhausting 2^32 combinations without finding a valid nonce in a specific block? Does it have something to do with the bitcoin mining "difficulty" thingy? I'm so confused right now......
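For scale: each hash of a fixed header wins with probability roughly 1 / (difficulty x 2^32), so the chance that all 2^32 nonce values fail for one header is about exp(-1/difficulty), which at any realistic difficulty is essentially 1. A quick Python sketch with a made-up difficulty figure:
    import math

    difficulty = 5e13                        # made-up figure, purely for scale
    p_win = 1 / (difficulty * 2**32)         # chance that a single hash meets the target
    p_exhaust = math.exp(-(2**32) * p_win)   # ~ (1 - p_win)**(2**32)
    print(p_exhaust)                         # ~0.99999999999998: running out of nonces is the norm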
submitted by Palpatine88888 to Bitcoin [link] [comments]

Where does the ASIC get the nonce from?

Every time the miner tries to hash (the block?) it uses a nonce (a random number). How is this chosen? Randomly? In sequence? In sequence from a certain point? Who assigns it?
The ASIC? The mining software? [the cpu] The pool? If the pool - what about peer-pooling?
AND could this be improved to reduce 'wasted' work globally - or to the advantage of the pool?
Thanks for any answers!
I am becoming more concerned that there are only incentives to centralise, and not de-centralise ... which works against bitcoin's nature (and strengths)
submitted by inteblio to BitcoinMining [link] [comments]

Jehuti vs Dicebot Part II

I don't know if you all were here for my last post but if you were, you would know that it didn't receive a lot of attention. That is okay because I brought pictures today.
Now I know that a lot of bitcoin strategies aren't really all the rage and don't even really work. Martingale is only good for medium-high chance rolls anyway. One particular strategy never works until it does. You feel me? This particular strategy is one that I came up with myself from just a humble excel sheet. From there, I remade the sequence from the excel sheet to fit Visual Basic .NET so that the values I generate will not be bound to only 500 consecutive losses or your old satoshi total. I am not gonna hype this post up or give you an excel sheet and tell you to read it. All I'm gonna tell you is that it works, for the simple fact that I want you all to know what it does and how to run the program at home yourself.
INTRO For those that don't know me yet, my dev name is either Darth Jehuti or Demon Jehuti. But Jehuti is cool for time's sake. Nice to meet you all. I've been into bitcoin for a few years and I can say that I love the craze of a currency that you can spend virtually anywhere. But I'm not gonna get into this introduction too much. Just know I develop whatever software anyone needs and I can code it in any language.
JEHUTI VS. DICEBOT. This program is one that I conceived out of countless hours of researching how the bitcoin casinos actually work. I'm not gonna get into all of the juicy shit but I will go thru with the basics for those that already gamble on these sites. The program is dynamic and will always spit out the many values for the custom strategy. More on that later. Jehuti VS Dicebot gambles two ways.
The first way is based off taking losses consecutively. I know how that sounds but it isn't as crazy as you might think. See, if you have a percentage chance that yields a 6x multiplier, you can only bet 1 satoshi 5 times. Then 2 satoshi twice and so on. My formula is: if |accumulated loss| < bet × multiplier, do bet. (That is, the absolute value of the accumulated loss is less than the bet times the multiplier.) Otherwise, increment the bet by 1 and try the formula again. If you were here for my last post, you would know that I suggested that one should start with 20000-50000 satoshis. But I, myself, started with 50000 and I'm currently at 73576 after about a day and a half. The longest time this program has run at once has been 21 hours straight. It never dropped to zero. I only turned it off because I like watching my Ray Donovan on Showtime, dammit. I've only run this on the Bitsler site but it should run on other bitcoin dice sites without a problem. Once the total hits 120000 satoshis or more, it will start to bet 2 to increase the winnings faster. 60000 satoshis with 1 as a wager equals the same consecutive loss as 120000 satoshis with 2 as a wager. That makes sense, right?
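In rough Python, the rule reads something like this (just a sketch of the formula described above, not the actual VB.NET program):
    def next_bet(bet, accumulated_loss, multiplier):
        # Per the formula: while |accumulated loss| >= bet * multiplier, bump the bet by 1 satoshi
        while abs(accumulated_loss) >= bet * multiplier:
            bet += 1
        return bet

    print(next_bet(bet=1, accumulated_loss=-5, multiplier=6))   # still 1
    print(next_bet(bet=1, accumulated_loss=-7, multiplier=6))   # bumped to 2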
The second way that JvD makes every roll count is rolling at a medium-high chance like 60-75% and, on a loss, multiplying the bet by enough to recover the last bet plus a tiny decimal amount on top, to gain a little more as well. This was just a basic strategy that didn't take as much thought as the last one, with timeouts according to the random range and how much satoshi you've accrued.
These two methods are both decided by a random number between 1-100.
SEUNTJIE'S DICEBOT. If you've heard of this particular program, that's good. Seuntjie has a custom sequence function that my program suite translates to. I.E. 0.00000001&7.77 This program is also useful for changing the dice rolling server seed. My program suite has a bunch of random timeouts purposely there to make sure the nonce of the current server seed isn't rolling too high or too many times. All you do is login to your designated site on this program, then open my JvD main robot. First, the main robot will decide which strategy to use and then at what percentage, wager, stop on win timeout, etc. Second, the main robot will grab the current satoshi total of your chosen bitcoin casino account and then call the secondary JvD robot to generate the new value lines asked for by the main. Then, once the main stores these lines in the clipboard, it will paste these lines into the custom sequence of this Seuntjie Dicebot program. That's it. Let it run all day if you desire. I am currently working to support this suite in low-RAM virtual machines to have them run in the absolute background so you can work on the main OS attentively and uninterrupted.
I will report back with another post once this hits 120000 satoshis to give you more insight about my program suite and to answer any questions that you might have regarding what it is that I'm doing and how I am doing it.
I'm not asking you to take this seriously. I am not gonna give you my donation link. I am not gonna scam you. I only like beating machines and I am simply sharing my success thus far into this venture of "casino mining". Stay tuned for my next post about how far I've gotten. And lastly, to all of you I say, please stay awesome! :)
submitted by DarthJehuti to u/DarthJehuti [link] [comments]

Blockchain & mining - my attempt to explain it

There are so many people invested in crypto now, but there are still quite a lot of people who don’t actually know what a “Blockchain” really is, nor do they truly understand its usefulness.
 
People hear these phrases like “digital ledger secured using cryptography” and think it sounds cool, but what exactly does that mean?
 
There are literally tons of informational resources on the net, but most of them fly straight over the heads of the average Joe. I thought it would be worth breaking down the concept of “Blockchain” to make it easy for anyone to understand.
 
So first and foremost, what is a “block” in a Blockchain? Well a block is a bunch of transactions grouped together. When I say “transactions”, I am referring to a ledger or list of transactional information.
 
Let me offer an example of a “transaction”:
 
Joe has $1000
Joe’s bank account is 1234-5678 @ HSBC
Joe sends Sarah $200
Sarah has $2000
Sarah’s bank account is 8765-4321 @ Bank of China
The time of the transaction is 12:47pm 20th Feb 2018
Joe’s account will now be $800
Sarah’s bank account is $2200
 
This is a simple example, but fundamentally this is just a short list of information pertaining to a single transaction. This transferral of money ($200 from one person to another) is added to a “block” alongside a whole bunch of other transactions from other people.
 
Let’s use Bitcoin for the remaining examples. Each “block” on the bitcoin blockchain is 1 MB in size. So what exactly is 1 MB? Well 1 MB, or “megabyte”, represents one million bytes of information. Now one “byte” of information represents a single ascii character. Every single character I am typing right now represents one byte. So “Hello” (without the quotations) represents 5 bytes of information.
 
So if we go back to my example transaction above, it takes up 246 bytes. This is just a tiny fraction of 1 MB, so you can see a lot of transactions of this size could be stored in a 1 MB block.
 
OK so hopefully you understand what a “block” at least represents. So the next question would be, how do you ensure this “block” of information has not been tampered with? After all, it would be utterly disastrous if someone were to access a block of information and change some of the information. Imagine changing the destination bank address, or the amounts involved!
 
In order to secure a “block” we use cryptography. Specifically we use something called a “hash”. A hash essentially takes a bunch of data, applies a fixed set of mathematical operations to the data, and the eventual output is a “hash” of the data.
 
Let me give you an example of an ultra-basic “hash algorithm” -
 
Step 1. Take a number and double it
Step 2. Add 6
Step 3. Divide it by 2
 
That’s it…. A basic hash algorithm!
 
Let’s take a couple of numbers and apply the hash algorithm to the numbers.
 
First we’ll start with 20
 
Step 1. 20 x 2 = 40
Step 2. 40 + 6 = 46
Step 3. 46 / 2 = 23
 
So in this example, the “hash” of the original number (20) is 23
 
Let’s apply it to another number….This time 22
 
Step 1. 22 x 2 = 44
Step 2. 44 + 6 = 50
Step 3. 50 / 2 = 25
 
So the “hash” of the original number (22) is now 25
 
Now any different number you try as your input will always produce a different number as your hashed output. However, if you apply my hashing algorithm to the number 20, the “hash” will always be 23, and if you apply it to the number 22, the “hash” will always be 25.
 
If we take the numbers I used in the above examples (20 & 22) as “inputs”, then the “output” (the hash) will always produce the same result, but any changes to the input will always affect the output.
 
Ok so that’s applying a hash to a number…..what about text? How do we “hash” a string of text?
 
Well that’s where something called the “Ascii Table” comes in. The Ascii Table assigns a unique numeric code to every character. This allows us to convert a string of text into a number. Let’s take the word “Hello” (without the quotes) and convert it to a number using the Ascii table.
 
Ascii Table : https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html
 
Capital H is represented as 72
Lower case e is represented as 101
Lower case l is represented as 108
Lower case l is represented as 108
Lower case o is represented as 111
 
If we concatenate these numbers we’d get 72101108108111
 
So we have a number…..lets apply my basic hashing algorithm to this number
 
Step 1. 72101108108111 x 2 = 144202216216222
Step 2. 144202216216222 + 6 = 144202216216228
Step 3. 144202216216228 / 2 = 72101108108114
 
So in this example, the “hash” of the word Hello is 72101108108114
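Here is that toy hash written out in Python, just to make the steps concrete (text_to_number is my own name for the ASCII-concatenation trick above):
    def toy_hash(number):
        return (number * 2 + 6) // 2       # double it, add 6, halve it (always a whole number)

    def text_to_number(text):
        # Concatenate the ASCII code of each character: "Hello" -> 72101108108111
        return int("".join(str(ord(ch)) for ch in text))

    print(toy_hash(20))                        # 23
    print(toy_hash(22))                        # 25
    print(toy_hash(text_to_number("Hello")))   # 72101108108114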
 
If I changed any letter, the hash would be different. If I even changed the capital H to a lower case h, the hash would be different. If anything at all changes, the hash would be different.
 
So hopefully you understand the concept of hashing….. Now I should state that my example hashing algorithm is painfully simple. It would be trivial to reverse engineer this, simply by reversing the steps. However, this is just my example hash.
 
Let’s compare this to the SHA256 hash.
 
The SHA256 “hash” of the word “welcome” (without the quotes) is 280D44AB1E9F79B5CCE2DD4F58F5FE91F0FBACDAC9F7447DFFC318CEB79F2D02
 
If you apply the SHA256 hash algorithm to the word welcome, the hash will ALWAYS be 280D44AB1E9F79B5CCE2DD4F58F5FE91F0FBACDAC9F7447DFFC318CEB79F2D02
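If you have Python installed, you can also check this with the standard hashlib library:
    import hashlib

    digest = hashlib.sha256(b"welcome").hexdigest()
    print(digest.upper())   # should match the value quoted above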
 
Try it yourself on a few different online SHA256 calculators:
 
http://www.xorbin.com/tools/sha256-hash-calculator
https://passwordsgenerator.net/sha256-hash-generato
http://www.md5calc.com/
 
So we know that if we apply the SHA256 hashing algorithm to the word welcome, we will of course always get the same result, because the steps involved in “hashing” data using the SHA256 algorithm are publicly documented, albeit very complex.
 
However, the steps are far from the simple 3-step process I gave in my example…..SHA256 uses 64 rounds of processing, and none of them is as basic as the plus, minus, multiply and divide used in my 3-step example.
 
I won’t go into the entire 64-step process (There are plenty of resources out there if you are interested) but just to give you an idea of the complexity of the hashing algorithm, I’ll go through the first few steps. But before we do this, we need to “prepare” the input.
 
To do this we first split the word into 4-byte chunks starting from the first character. The word "welcome" (without the quotes) contains 7 characters, so it is split into two chunks
 
Chunk A – welc
Chunk B - ome
 
Ok, now for each chunk, we convert this to ascii
 
Chunk A – welc = 119 101 108 99
Chunk B – ome = 111 109 101
 
Now we convert these values to a HEX value (for information on hex, take a look here : http://whatis.techtarget.com/definition/hexadecimal)
 
Chunk A – 119 101 108 99 = 77 65 6c 63
Chunk B – 111 109 101 = 6f 6d 65
 
Now any chunk that is not a complete 4 bytes needs to be “padded” to make it a complete 4-byte chunk. The first byte of this padding is always 80 in hex (a single 1 bit followed by zeros).
 
Chunk A is fine….it's 4-bytes, so does not require any padding. Chunk B is only 3 bytes, so it needs an extra byte of padding. To do this we simply append hex 80 to the end.
 
So Chunk B becomes 6f 6d 65 80
 
The two hex chunks are now concatenated back together and padded out with zeros to create a 56-byte data string. Each byte is written as two hex characters, so a zero byte appears as 00.
 
So the two strings go together and lots of hex value zeros go on the end to make 56 bytes
 
77 65 6C 63 6F 6D 65 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 
We now append the length of the original message to the end. This length is expressed in bits: our message “welcome” is 7 bytes, which is 7 x 8 = 56 bits, and 56 in hex is 38. It is written as an 8-byte number (seven zero bytes followed by 38) and appended to the 56 bytes above to create a complete 64-byte string.
 
So the total 64-byte string has become:
 
77 65 6C 63 6F 6D 65 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 38
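If you want to sanity-check this preparation step, here is a small Python sketch of just the padding (my own illustration; the last 8 bytes are the original message length in bits, written big-endian, which is why the block ends in 38):
    import hashlib

    def sha256_pad(message: bytes) -> bytes:
        # SHA256 pre-processing: append a 0x80 byte (a single 1 bit), zero-fill,
        # then append the original message length in bits as an 8-byte big-endian number.
        bit_length = len(message) * 8
        padded = message + b"\x80"
        padded += b"\x00" * ((56 - len(padded)) % 64)
        padded += bit_length.to_bytes(8, "big")
        return padded

    block = sha256_pad(b"welcome")
    print(block.hex(" "))                            # ends in "... 00 38", matching the string above
    print(hashlib.sha256(b"welcome").hexdigest())    # the finished hash, for comparison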
 
The 64 byte string is then converted to binary….
 
01110111 01100101 01101100 01100011 01101111 01101101 01100101 10000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00111000
 
In the data section (the first 56 bytes) the first byte of data (01110111 in binary) represents 77 in hex, which in turn represents the decimal value of 119, which is the ascii value of w
 
The second byte of data (01100101 in binary) represents 65 in hex, which in turn represents the decimal value of 101, which is the ascii value of e
 
In the final section, the very last byte of data (00111000 in binary) represents 38 in hex, which in turn represents the decimal value of 56. This is the length of the original message in bits (7 bytes x 8 bits per byte), so this value will always be a multiple of 8.
 
Ok so now we’ve got that 64-byte data stream, we now apply some other things to it.
 
At this point Sha256 does some "shifting" of the data.
 
"Shifting" is when you move data around – So for example if we “shift” every square on the grid backwards 7 places, then this is what would happen.
 
10000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00111000 01110111 01100101 01101100 01100011 01101111 01101101 01100101
 
Ok so Sha256 does a few more rounds of shifting until eventually, the data has been moved around and looks completely different on the grid to what it started with.
 
After all this is done, only then is the data “prepared” and ready to be manipulated through the 64 steps to create the hash! On the face of it this looks complicated, but for a computer, hashing data using SHA256 is actually fairly simple. It can do it extremely quickly! A human being could in fact do a complete SHA256 hash with enough patience. Someone actually did this with pen and paper and it took them a little over a day.
 
After the 64 rounds of adjustment, the final hashed value of welcome comes to 280D44AB1E9F79B5CCE2DD4F58F5FE91F0FBACDAC9F7447DFFC318CEB79F2D02 and, providing that you used SHA256 to hash it, the word welcome will always hash to this value. If I change anything in the input, the output hash changes dramatically.
 
For example, if I change welcome to Welcome (capital W), the Sha256 hash becomes 0E2226B5235F0FF94A276EB4D07A3BFEA74B7E3B8B85E9EFCA6C18430F041BF8 As you can see it’s totally unrecognisable compared to the previous hash.
 
So hopefully now you have an understanding of hashing, you can see that the data stored in a block can be hashed, and it will generate a hash value.
 
Copy the following section of transaction text into any online SHA256 calculator:
 
Joe has $1000
Joe’s bank account is 1234-5678 @ HSBC
Joe sends Sarah $200
Sarah has $2000
Sarah’s bank account is 8765-4321 @ Bank of China
The time of the transaction is 12:47pm 20th Feb 2018
Joe’s account will now be $800
Sarah’s bank account is $2200
 
You should get the following hash value:
 
F4162A24257D3D2995E80B8FB08F43A9F029CC951F8C103051EAD30BFCDCC63F
 
Now this is just one transaction, but the point is that you will never see that same hash value again, unless the EXACT same transaction information is hashed with SHA256. If you change anything at all, the hash value will change completely.
 
Now I won’t go into why this is virtually impossible to reverse engineer, but suffice to say the estimates of computing power required to reverse a SHA256 hash are as follows:
 
Based on current computing power, brute-forcing SHA256 would take a powerful modern PC approximately 71,430,540,814,238,958,387,154 years. Some scientists believe the sun will “extinguish” in about 5,000,000,000 years.
 
For now, SHA256 is pretty secure!
 
So if we have a “hashed block”, suffice to say it is pretty much impossible to break.
 
So there we have it...a block!
 
OK so what does the word “chain” in blockchain mean?
 
Simple….. you take the hash value of the first block, and stick it into the very next block as the first part of data, just before you start adding your new transactions. Can you see what effect this has?
 
If my first block hash is:
 
F4162A24257D3D2995E80B8FB08F43A9F029CC951F8C103051EAD30BFCDCC63F
 
If I put this just in front of all my new transactional data, then the total data in the new block (including the hash of the previous block) all gets hashed as one to create a new hash for the second block. If anyone tampers with the first block, the hash changes, and therefore won’t match with the hash put into the second block. This has a knock-on effect to all subsequent blocks.
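A toy sketch of that chaining idea in Python (the “transactions” strings here are obviously just placeholders):
    import hashlib

    def sha256_hex(text):
        return hashlib.sha256(text.encode()).hexdigest()

    # Each block's hash covers the previous block's hash plus its own data,
    # so editing block 1 after the fact breaks every hash that follows it.
    block1_hash = sha256_hex("block 1 transactions...")
    block2_hash = sha256_hex(block1_hash + "block 2 transactions...")
    block3_hash = sha256_hex(block2_hash + "block 3 transactions...")

    tampered = sha256_hex("block 1 transactions (edited)...")
    print(tampered == block1_hash)   # False, and blocks 2 and 3 no longer match up either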
 
So if you have a blockchain full of nodes (servers) and node A is reporting the cumulative hash of all blocks on the latest block in the chain to be XXXXXX, but node B, node C, and node D are reporting the cumulative hash for all blocks to be YYYYYYYY, then it’s immediately obvious that node A has been compromised and needs to be removed. After all, the entire chain of entries ultimately ends up with an up-to-date hash of all the previous blocks, and if anything changes…..literally one single character in any single block…..then the hash proves that the chain has been compromised!
 
So what exactly is mining? Mining is simply re-running the hash over and over and over again onto a block, until you reach a constant…..What I mean by a constant is as follows:
 
  1. You take your block of data
  2. You hash it to get a hash value
  3. You check to see if the hash begins with four zeros 0000
  4. If it doesn’t you now add 1 to the data and re-hash
  5. You check to see if the hash begins with four zeros 0000
  6. If it doesn’t you now increment the number by one and re-hash
 
You now repeat steps 5 & 6 over and over and over again, until eventually, at some point, you will see 4 zeros.
 
This extra value you are adding is what is known as a “nonce”, which is short for “number used once”. It basically means that you are adding a number that increments in the block, whilst everything else in the block remains constant.
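In Python, the whole mining loop is only a few lines (this mirrors the idea rather than Bitcoin’s real format, which hashes an 80-byte header with double SHA256 and compares it against a numeric target instead of counting leading zeros):
    import hashlib

    def mine(data, prefix="0000"):
        # Keep incrementing the nonce until the hash of (data + nonce) starts with the prefix
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("a block of transaction data goes here")
    print(nonce, digest)   # a nonce whose hash begins with 0000 (on average ~65,536 attempts)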
 
Let’s take a simple transaction to use as an example:
 
Fred has $200
Claire has $300
Joe sends Claire $50
Fred now has $150
Claire now has $350
 
Ok nice and simple….. Let’s use a great website resource to demonstrate mining this data.
 
Copy this basic transaction into the “data” section of this web page and delete any visible “nonce” value (if there is one there) - https://anders.com/blockchain/block.html
 
(NOTE: when you copy/paste from reddit it might also copy the spaces between the lines, so you would need to remove them, as a space is also a valid ascii character.)
 
If done correctly, you should see a hash value at the bottom of f710ba16e8b987575a23ce0fe13a4dfbd3e72676c65890a7b8acab421748195b
 
Now this doesn’t begin with 0000, so now let’s click on the "mine" button, and the page will keep incrementing the nonce value until eventually the hash will begin with 0000.
 
The process should take around 5-10 seconds, and eventually the hash will be displayed as 00009db80aa366297984130a3f2b74b4f3a6eb044df24de700a616ca9e6aacb6
 
This does begin with 0000 and it took 15,708 “hashes” to reach it. You have reached a constant!
 
This block would now be deemed as a valid block, and the hash of this block is what is passed onto the next block! This is basically mining!
Mining is necessary to ensure that all blocks on the block chain are valid and accurate. Obviously doing this requires computational power, which requires equipment (computers) and energy (electricity) which must be paid for, hence the reason that "miners" are compensated with coins for their efforts.
 
So hopefully you now have a better understanding of block chains and mining :-)
submitted by jpowell79 to u/jpowell79 [link] [comments]

[For newbies]You’d Better Know 40 Jargons in Cryptocurrency World.

Many newbies may feel strange or even confused about the various jargon when we step into the cryptocurrency world for the first time. I read lots of information on the Internet and combined it with my own understanding to sort out 40 jargon terms and some useful questions that are common while mining. I will divide these into several parts. If there is something wrong in my description, please point it out directly, thank you very much!

1. Digital Currency
A digital currency is a form of currency that is available only in digital or electronic form, and not in physical form. It is also called digital money, electronic money, electronic currency, or cyber cash. Digital currency includes virtual currency, cryptocurrency, electronic money, and so on.

2. Cryptocurrency
A cryptocurrency is a digital or virtual currency that uses cryptography for security. A cryptocurrency is difficult to counterfeit because of this security feature. Many cryptocurrencies are decentralized systems based on blockchain technology, a distributed ledger enforced by a disparate network of computers. A defining feature of a cryptocurrency, and arguably its biggest allure, is its organic nature; it is not issued by any central authority, rendering it theoretically immune to government interference or manipulation. There are currently well over one thousand different cryptocurrencies in the world and many people see them as the lynchpin of a fairer, future economy. Countries have different definitions of cryptocurrencies, such as property, commodities, currency, virtual currency, etc.

3. Token
Tokens are different from bitcoins and altcoins in that they are not mined by their owners nor primarily meant to be traded (although they may be traded on exchanges if the company that issued them becomes valuable enough in the eyes of the public), but to be sold for fiat or cryptocurrency in order to fund the start-up's tech project. Moreover, the amount of token allocation is often determined in advance, such as how much of the token is allocated to the developer and how much is used for operations.

4. AltCoin
An altcoin is any digital cryptocurrency similar to Bitcoin. The term is said to stand for “alternative to Bitcoin” and is used to describe any cryptocurrency that is not Bitcoin. Altcoins are created by diverging from Bitcoin consensus rules (the fundamental rules of the cryptocurrency’s network) or by developing a new cryptocurrency from scratch.

5. Blockchain
A type of distributed digital ledger to which data is recorded sequentially and permanently in ‘blocks’. Each new block is linked to the immediately previous block with a cryptographic signature, forming a ‘chain’. This tamper-proof self-validation of the data allows transactions to be processed and recorded to the chain without recourse to a third-party certification agent. The ledger is not hosted in one location or managed by a single owner, but is shared and accessed by anyone with the appropriate permissions – hence ‘distributed’. Each of the computers in the distributed network maintains a copy of the ledger to prevent a single point of failure (SPOF) and all copies are updated and validated simultaneously.

6. Block
A package of data containing multiple transactions over a given period of time. A block records some or all of the latest bitcoin transactions that have not yet been recorded in any previous block.

7. Block Header
A block header is used to identify a particular block on an entire blockchain and is hashed repeatedly to create proof of work for mining rewards. The header of a block is divided into six components: the version number of the software; the hash of the previous block (the hash of the previous block is contained in the hash of the new block, so the blocks of the blockchain all build on each other); the root hash of the Merkle tree; the time in seconds since 1970-01-01 T00:00 UTC; the target of the current difficulty (the lower the target in bits is, the harder it is to find a matching hash); and the nonce (the variable incremented by the proof of work - in this way, the miner guesses a valid hash, one that is smaller than the target). As part of a standard mining exercise, a block header is hashed repeatedly by miners by altering the nonce value. Through this exercise, they attempt to create proof of work, which helps miners get rewarded for their contributions to keeping the blockchain system running.

8. Hashing
Hashing is the result of applying an algorithmic function to data in order to convert it into a fixed-length string of numbers and letters that looks random. This acts as a digital fingerprint of that data, allowing it to be locked in place within the blockchain.

9. Genesis Block
The genesis block is the first block in any blockchain-based protocol. It is the foundation on which additional blocks are sequentially added to form a chain of blocks, resulting in the term “blockchain” being coined. The genesis block is also referred to as block zero. The second block to be added on top of block zero would then be referred to as block number one.

10. Block Height
The number used to refer to the ordering of blocks is known as the block height number. A blockchain contains a series of blocks, so the block height is always a non-negative integer: the genesis block sits at height zero, the next block at height one, and so on.

In the next few days, we will continue to post about more jargon and some useful questions that are common while mining, so please continue to follow our posts.
submitted by hashaltcoin to u/hashaltcoin [link] [comments]

Bitcoin questions

Hi Folks,
Giving myself a crash course in bitcoin and have some techo questions for the boffins if this is the right forum. Some of the questions are basic, but I haven't seen them addressed so far in the materials I've been using. Feel free to disabuse me of any misunderstandings or false assumptions embedded.
1) How do miners decide when it's time to calculate a block? At some point each miner decides it has selected enough transactions from the mempool for a block and starts hashing. What is this point - is it based on a number of transactions or the 10-minute timer or other? Is it well-defined?
2) Each miner is computing a different block - does this affect the odds of a solution? Each miner can select a different set of transactions from the mempool so no miner is calculating the same block. When there is more than one solved block, the other nodes just choose the first received. Would the average solve time of the network be any different if all miners were computing the same block?
3) Will some miners be computing the same block some of the time by random chance? Presumably miners using the same code or a similar algorithm to select transactions from the mempool will end up trying to compute the same block some of the time.
4) How are nonces selected? The original bitcoin paper says miners "implement the proof-of-work by incrementing a nonce". In practice does mining code use an increment function from a zero or initial random value, or use say a random function or other. An issue here is that if all miners started from the same nonce (such as zero) then it probably reduces the chance of a mining fee because it increases the chance of another miner solving the same block at the same time and randomly being selected as the longest chain (assuming point 3 above)
5) Is anything clever done with selecting the nonce in order to decrease the solve time? Presumably not as that's the point of a good hash algorithm - no relation to the input. Nevertheless I wanted to double-check this point.
6) How do nodes verify transactions? We have a merkle tree hash for transactions in each block, but how are transactions verified before being added to the block? The satoshi paper seems to say that nodes maintain a "global" merkle tree for this purpose. Is this built from the plaintext transactions in all blocks and how large is this data structure currently? Does this data structure have any impact on bitcoin's scalability as the number of unique addresses grows?
7) Are bitcoin ASICs essentially specialised in performing SHA256 at minimal electrical cost?
Appreciate any comments on the above!
submitted by perryurban to CryptoTechnology [link] [comments]

So you’ve got your miner working, busy hashing away … but what is it really doing?

Posted for eternity @ https://vertcoin.easymine.online/articles/mining
Your miner is repeatedly hashing (see below for detail about a hash) a block of data, looking for a resulting output that is lower than a predetermined target. Each time this calculation is performed, one of the fields in the input data is changed, and this results in a different output. The output is not able to be determined until the work is completed – otherwise why would we bother doing the work in the first place?
Each hash takes a block header (see more below, but basically this is a 80-byte block of data). It runs this through the hashing function, and what comes out is a 32-byte output. For each, we usually represent that output in hexadecimal format, so it looks something like:
5da4bcb997a90bec188542365365d8b913af3f1eb7deaf55038cfcd04f0b11a0 
(that’s 64 hexadecimal characters – each character represents 4-bits. 64 x 4 bits = 256bit = 32 bytes)
The maximum value for our hash is:
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF 
And the lowest is:
0000000000000000000000000000000000000000000000000000000000000000 
The goal in Proof-of-Work systems is to look for a hash that is lower than a specific target, i.e. starts with a specific number of leading zeros. This target is what determines the difficulty.
As the output of the hash is indeterminate, we look to statistics and probability to estimate how much work (i.e. attempts at hashing) we need to complete to find a hash that is lower than a specific target. Since each hex character has 16 possible values, we can assume that finding a hash that starts with one leading zero will take, on average, 16 hashes. To find one that will start with two leading zeros (00), we’re looking at 256 hashes. Four leading zeros (0000) will take 65,536 hashes. Eight leading zeros (00000000) takes 4,294,967,296 hashes. So on and so on, until we realize that it will take 2 ^ 256 (a number too big for me to show here) attempts at hitting our minimum hash value.
Remember – this number of hashes is just an estimate. Think of it like rolling a dice. A 16-sided dice. And then rolling it 64 times in a row. And hoping to strike a specific number of leading zeros. Sometimes it will take far less than the estimate, sometimes it will take far more. Over a long enough time period though (with our dice it may take many billions of years), the averages hold true.
Difficulty is a measure used in cryptocurrencies to simply show how much work is needed to find a specific block. A block of difficulty 1 must have a hash smaller than:
00000000FFFF0000000000000000000000000000000000000000000000000000 
A block of difficulty 1/256 (0.00390625) must have a hash lower than:
000000FFFF000000000000000000000000000000000000000000000000000000 
And a block of difficulty 256 must have a hash lower than:
0000000000FFFF00000000000000000000000000000000000000000000000000 
So the higher the difficulty, the lower the hash must be; therefore more work must be completed to find the block.
Take a recent Vertcoin block – block # 852545, difficulty 41878.60056944499. This required a hash lower than:
000000000001909c000000000000000000000000000000000000000000000000 
To find this, a single miner would need to have completed, on average, 179,867,219,848,013 hashes (calculated by taking the number of hashes needed for a difficulty 1 block – 4,294,967,296, or 2 ^ 32, or 16 ^ 8 – and multiplying it by the difficulty). Of course, our single miner may have found this sooner – or later – than predicted.
Cryptocurrencies alter the required difficulty on a regular basis (some like Vertcoin do it after every block, others like Bitcoin or Litecoin do it every 2016 blocks), to ensure the correct number of blocks are found per day. As the hash rate of miners increases, so does the difficulty to ensure this average time between blocks remains the same. Likewise, as hash rate decreases, the difficulty decreases.
With difficulties as high as the above example, solo-mining (mining by yourself, not in a pool) becomes a very difficult task. Assume our miner can produce 100 MH/s. Plugging this into the numbers above, we can see it’s going to take him (on average) 1,798,673 seconds of hashing to find a hash lower than the target – that’s just short of 21 days. But, if his luck is down, it could easily take twice that long. Or, if he’s lucky, half that time.
So, assuming he hits the average, for his 21 days of mining he has earned 25 VTC.
Let's take another look at the same miner, but this time he’s going to join a pool, where he is working with a stack of other miners looking for that elusive hash. Assume the pool he has joined does 50 GH/s – in that case he has 0.1 / 50 or 0.2% of the pool’s hash rate. So for any blocks the pool finds he should earn 0.2% of 25 VTC = 0.05 VTC. At 50 GH/s, the pool should expect to spend 3,597 seconds between finding blocks (2 ^ 32 * difficulty / hashrate). So about every hour, our miner can expect to earn 0.05 VTC. This works out to be about 1.2 VTC per day, and when we extrapolate over the estimated 21 days of solo mining above, we’re back to 25 VTC.
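The arithmetic from the last two paragraphs, as a quick sketch:
    difficulty = 41878.60056944499
    expected_hashes = difficulty * 2**32          # ~1.799e14 hashes on average to find the block

    solo_rate = 100e6                             # 100 MH/s
    print(expected_hashes / solo_rate / 86400)    # ~20.8 days between blocks when mining solo

    pool_rate = 50e9                              # 50 GH/s pool
    print(expected_hashes / pool_rate)            # ~3,597 seconds between pool blocks
    print(solo_rate / pool_rate * 25)             # ~0.05 VTC for our miner per pool block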
The beauty of pooled-mining over solo-mining is that the time between blocks, whilst they can vary, should be closer to the predicted / estimated times over a shorter time period. The same applies when comparing pools – pools with a smaller hash rate will experience a greater variance in time between blocks than a pool with a greater hash rate. But in the end, looking back over a longer period of time, earnings will be the same.
Hashes
A Hash is a cryptographic function that can take an arbitrary sized block of data and maps it to a fixed sized output. It is a one-way function – only knowing the input data can one calculate the output; the reverse action is impossible. Also, small changes to the input data usually result in significant changes to the output value.
For example, take the following string:
“the quick brown fox jumps over the lazy dog” 
If we perform a SHA256 hash of this, it results in:
05c6e08f1d9fdafa03147fcb8f82f124c76d2f70e3d989dc8aadb5e7d7450bec 
If we change a single character in the input string (in this case we will replace the ‘o’ in ‘over’ to a zero), the resulting hash becomes:
de492f861d6bb8438f65b2beb2e98ae96a8519f19c24042b171d02ff4dfecc82 
Blocks
A block is made up of a header, and at least one transaction. The first transaction in the block is called the Coinbase transaction – it is the transaction that creates new coins, and it specifies the addresses that those coins go to. The Coinbase transaction is always the first transaction in a block, and there can only be one. All other transactions included in a block are transactions that send coins from one wallet address to another.
The block header is an 80-byte block of data that is made up of the following information in this order:
  • Version – a 32-bit/4-byte integer
  • Previous Block’s SHA256d Hash – 32 bytes
  • Merkle Hash of the Transactions – 32 bytes
  • Timestamp - a 32-bit/4-byte integer that represents the time of the block in seconds past 1st January 1970 00:00 UTC
  • nBits - a 32-bit/4-byte integer that represents the maximum value of the hash of the block
  • Nonce - a 32-bit/4-byte integer
The Version of a block remains relatively static through a coin’s lifetime – most blocks will have the same version. Typically only used to introduce new features or enforce new rules – for instance Segwit adoption is enforced by encoding information into the Version field.
The Previous Block’s Hash is simply a doubled SHA256 hash of the last valid block’s header.
The Merkle Hash is a hash generated by chaining all of the transactions together in a hash tree – thus ensuring that once a transaction is included in a block, it cannot be changed. It becomes a permanent record in the blockchain.
Timestamp loosely represents the time the block was generated – it does not have to be exact, anywhere within an hour each way of the real time will be accepted.
nBits – this is the maximum hash that this block must have in order to be considered valid. Bitcoin encodes the maximum hash into a 4-byte value as this is more efficient and provides sufficient accuracy.
Nonce – a simple 4-byte integer value that is incremented by a miner in order to find a resulting hash that is lower than that specified by nBits.
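To make that layout concrete, here is a rough Python sketch that packs the six fields above into an 80-byte header and double-SHA256 hashes it (the field values are placeholders, and note that Vertcoin’s actual proof-of-work uses a different hashing algorithm even though the header layout is the same):
    import hashlib
    import struct

    def block_header(version, prev_hash, merkle_root, timestamp, nbits, nonce):
        # 80 bytes total: 4 + 32 + 32 + 4 + 4 + 4. Integer fields are packed little-endian.
        return (struct.pack("<I", version)
                + prev_hash            # 32-byte hash of the previous block's header
                + merkle_root          # 32-byte Merkle hash of the transactions
                + struct.pack("<III", timestamp, nbits, nonce))

    def sha256d(data):
        # Double SHA256, as used for the Previous Block's Hash described above
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    header = block_header(0x20000000, b"\x00" * 32, b"\x00" * 32, 1514764800, 0x1d00ffff, 0)
    print(len(header))                    # 80
    print(sha256d(header)[::-1].hex())    # hashes are conventionally displayed byte-reversed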
submitted by nzsquirrell to VertcoinMining [link] [comments]

Sidechain headers on mainchain (unification of drivechains and spv proofs) | ZmnSCPxj | Sep 05 2017

ZmnSCPxj on Sep 05 2017:
Good morning all,
I have started to consider a unification of drivechains, blind merged mining, and sidechain SPV proofs to form yet another solution for sidechains.
Briefly, below are the starting assumptions:
  1. SPV proofs are a short chain of sidechain block headers. This is used to prove to the mainchain that some fund has been locked in the sidechain and the mainchain should unlock an equivalent fund to the redeemer.
  2. SPV proofs are large and even in compact form, are still large. We can instead use miner voting to control whether some mainchain fund should be unlocked. Presumably, the mainchain miners are monitoring that the sidechain is operating correctly and can know directly if a side-to-main peg is valid.
  3. To maintain mainchain's security, we should use merged mining for sidechain mining rather than have a separate set of miners for mainchain and each sidechain.
  4. A blockchain is just a singly-linked list. Genesis block is the NULL of the list. Additional blocks are added at the "front" of the singly-linked list. In Bitcoin, the Merkle tree root is the "pointer to head" and the previous block header ID is the "pointer to tail"; additional data like proof-of-work nonce, timestamp, and version bits exist but are not inherent parts of the blockchain linked list.
  5. In addition to SPV proofs, we should also support reorg proofs. Basically, a reorg proof is a longer SPV proof that shows that a previous SPV proof is invalid.

With those, I present the idea, "sidechain headers in mainchain".
Let us modify Sztorc's OP_BRIBEVERIFY to require the below SCRIPT to use:
OP_BRIBEVERIFY OP_DROP OP_DROP OP_DROP
We also require that be filled only once per mainchain block, as per the "blind" merge mining of Sztorc.
The key insight is that the and are, in fact, the sidechain header. Concatenating those data and hashing them is the block header hash. Just as additional information (like extranonce and witness commitment) are put in the mainchain coinbase transaction, any additional information that the sidechain would have wanted to put in its header can be committed to in the sidechain's equivalent of a coinbase transaction (i.e. a sidechain header transaction).
(All three pieces of data can be "merged" into a single very long data push to reduce the number of OP_DROP operations, this is a detail)
Thus, the sidechain header chain (but not the block data) is embedded in the mainchain itself.
Thus, SPV proofs do not need to present new data to the mainchain. Instead, the mainchain already embeds the SPV proof, since the headers are already in the mainchain's blocks. All that is needed to unlock a lockbox is to provide some past sidechain header hash (or possibly just a previous mainchain block that contains the sidechain header hash, to make it easier for mainchain nodes to look up) and the Merkle path to a sidechain-side side-to-main peg transaction. If the sidechain header chain is "long enough" (for example, 288 sidechain block headers) then it is presumably SPV-safe to release the funds on the mainchain side.

Suppose a sidechain is reorganized, while a side-to-main peg transaction is in the sidechain that is to be reorganized away.
Let us make our example simpler by requiring an SPV proof to be only 4 sidechain block headers.
In the example below, small letters are sidechain block headers to be reorganized, large letters are sidechain block headers that will be judged valid. The sidechain block header "Aa" is the fork point. b' is the sidechain block containing the side-to-main peg that is lost.
Remember, for each mainchain block, only a single sidechain block header for a particular sidechain ID can be added.
The numbers in this example below are mainchain block height numbers.
0: Aa
1: b'
2: c
4: C
5: d
6: D
7: E
8: F
9: G
10: H <- b' side-to-main is judged as "not valid"
Basically, in case of a sidechain fork, the mainchain considers the longest chain to be valid if it is longer by the SPV proof required length. In the above, at mainchain block 10, the sidechain H is now 4 blocks (H,G,F,E) longer than the other sidechain fork that ended at d.
Mainchain nodes can validate this rule because the sidechain headers are embedded in the mainchain block's coinbase. Thus, mainchain fullnodes can validate this part of the sidechain rule of "longest work chain".

Suppose I wish to steal funds from sidechain, by stealing the sidechain lockboxes on the mainchain. I can use the OP_BRIBEVERIFY opcode which Sztorc has graciously provided to cause miners that are otherwise uninterested in the sidechain to put random block headers on a sidechain fork. Since the mainchain nodes are not going to verify the sidechain blocks (and are unaware of sidechain block formats in detail, just the sidechain block headers), I can get away with this on the mainchain.
However, to do so, I need to pay OP_BRIBEVERIFY multiple times. If our rule is 288 sidechain blocks for an SPV proof, then I need to pay OP_BRIBEVERIFY 288 times.
This can then be used to reduce the risk of theft. If lockboxes have a limit in value, or are fixed in value, that maximum/fixed value can be made small enough that paying OP_BRIBEVERIFY 288 times is likely to be more expensive than the lockbox value.
In addition, because only one sidechain header can be put for each mainchain header, I will also need to compete with legitimate users of the sidechain. Those users may devote some of their mainchain funds to keep the sidechain alive and valid by paying OP_BRIBEVERIFY themselves. They will reject my invalid sidechain block and build from a fork point before my theft attempt.
Because the rule is that the longest sidechain must beat the second-longest chain by 288 (or however many) sidechain block headers, legitimate users of the sidechain will impede my progress to successful theft. This makes it less attractive for me to attempt to steal from the sidechain.
The effect is that legitimate users are generating reorg proofs while I try to complete my SPV proof. As the legitimate users increase their fork, I need to keep up and overtake them. This can make it unattractive for me to steal from the sidechain.
Note however that we assume here that a side-to-main peg cannot occur more often than an entire SPV proof period.

Suppose I am a major power with influence over >51% of mainchain miners. What happens if I use that influence to cause the greatest damage to the sidechain?
I can simply ask my miners to create invalid side-to-main pegs that unlock the sidechain's lockboxes. With a greater than 51% of mainchain miners, I do not need to do anything like attempt to double-spend mainchain UTXO's. Instead, I can simply ask my miners to operate correctly to mainchain rules, but violate sidechain rules and steal the sidechain's lockboxes.
With greater than 51% of mainchain miners, I can extend my invalid sidechain until we reach the minimum necessary SPV proof. Assuming a two-way race between legitimate users of the sidechain and me, since I have >51% of mainchain miners, I can build the SPV proof faster than the legitimate users can create a reorg proof against me. This is precisely the same situation that causes drivechain to fail.
An alternative is to require that miners participating in sidechains to check the sidechain in full, and to consider mainchain blocks containing invalid sidechain headers as invalid. However, this greatly increases the amount of data that a full miner needs to be able to receive and verify, effectively increasing centralization risk for the mainchain.

The central idea of drivechain is simply that miners vote on the validity of sidechain side-to-main pegs. But this is effectively the same as miners - and/or OP_BRIBEVERIFY users - only putting valid sidechain block headers on top of valid sidechain block headers. Thus, if we instead use sidechain-headers-on-mainchain, the "vote" that the sidechain side-to-main peg is valid, is the same as a valid merge-mine of the sidechain.
SPV proofs are unnecessary in drivechain. In sidechain-header-on-mainchain, SPV proofs are already embedded in the mainchain. In drivechain, we ask mainchain fullnodes to trust miners. In sidechain-header-on-mainchain, mainchain fullnodes validate SPV proofs on the mainchain, without trusting anyone and without running sidechain software.
To validate the mainchain, a mainchain node keeps a data structure for each existing sidechain's fork.
When the sidechain is first created (perhaps by some special transaction that creates the sidechain's genesis block header and/or sidechain ID, possibly with some proof-of-burn to ensure that Bitcoin users do not arbitrarily create "useless" sidechains, but still allowing permissionless creation of sidechains), the mainchain node creates that data structure.
The data structure contains:
  1. A sidechain block height, a large number initially 0 at sidechain genesis.
  2. A side-to-main peg pointer, which may be NULL, and which also includes a block height at which the side-to-main peg is.
  3. Links to other forks of the same sidechain ID, if any.
  4. The top block header hash of the sidechain (sidechain tip).
If the sidechain's block header on a mainchain block is the direct descendant of the current sidechain tip, we just update the top block header hash and increment the block height.
If there is a side-to-main peg on the sidechain block header, if the side-to-main peg pointer is NULL, we initialize it and store the block height at which the side-to-main peg exists. If there i...[message truncated here by reddit bot]...
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-Septembe014910.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

So are folks ready to deep six SHA256(SHA256(HEADER)) PoW yet?

A PoW that is so easily unrolled onto ASICs -- which is to say, one that is not demanding of memory, storage, and memory<->storage bandwidth -- will always lead in the end to mining concentration, with the most marked advantage landing in the hands of manufacturing concerns.
It is true of course that any process that can be done in commodity hardware can be done better, faster, in specialized ASIC. But with Bitcoin we have a single easily unrolled hash function buzzing a small piece of header data with only a very tiny nonce getting incremented between operations. Tiny, tiny, tiny amounts of memory per hash instance, nil requirements in terms of storage, and zero need for high bandwidth memory <-> storage.
More and more parallel hashing instances per chip, smaller processes, and economies of scale lead to centralization especially in the hands of the manufacturers.
That centralization leads to power, and as the entirety of human history proves, power has a corrupting influence.
We see this in the reporting today regarding remote kill functionality in Bitmain miners.
We see it more generally in folks like Bitfury who do not offer consumer level mining equipment, and would rather sell monster shipping container sized rigs to monster sized customers. (If you have to ask how much it costs you can't afford it).
Now, forgive me, but I cannot resist dropping an "I told you so" to all of the brazen UASF BIP148 supporters. I've been urging caution and getting a lot of crap for it. But imagine how a UASF chain fork would have gone if the UASF chain were not only a shorter chain at the outset, but the Antminer hashpower on that chain got remotely bricked in the middle of the process to see which fork would survive.
At any rate, I'd say it's time to put a plan in place to at least be READY to roll out a new PoW for Bitcoin within an 18 to 36 month time horizon.
It's very extreme to think about, but ask yourself: could you have ever imagined anything like the matters that are prevalent in mining these days? Pools refusing to implement broadly supported and very well tested protocol upgrades. Covert, patented optimizations. Remote kill code in a crushingly large segment of deployed mining hardware.
Hell, the mere fact that it is possible to get 5 human beings in a single room and, among them, have an overwhelming majority of hashpower spoken for is worrisome enough.
I've seen luke-jr post about the potential need for such a change before. I would love to hear if nullc or any of the other core devs have considered the circumstances where a PoW change would need to be considered. Perhaps if there were a crypto break of SHA256 for example.
Just my thoughts at present.
submitted by Shmullus_Zimmerman to Bitcoin [link] [comments]

Information and FAQ

Hi, for everyone looking for help and support for IOTA you have come to the right place. Please read this information, the FAQ and the side bar before asking for help.

Information

IOTA

IOTA is an open-source distributed ledger protocol launched in 2015 that goes 'beyond blockchain' through its core invention of the blockless ‘Tangle’. The IOTA Tangle is a quantum-resistant Directed Acyclic Graph (DAG), whose digital currency 'iota' has a fixed money supply with zero inflationary cost.
IOTA uniquely offers zero-fee transactions & no fixed limit on how many transactions can be confirmed per second. Scaling limitations have been removed, since throughput grows in conjunction with activity; the more activity, the more transactions can be processed & the faster the network. Further, unlike blockchain architecture, IOTA has no separation between users and validators (miners / stakers); rather, validation is an intrinsic property of using the ledger, thus avoiding centralization.
IOTA is focused on being useful for the emerging machine-to-machine (m2m) economy of the Internet-of-Things (IoT), data integrity, micro-/nano- payments, and other applications where a scalable decentralized system is warranted.
More information can be found here.

Non reusable addresses

Contrary to traditional blockchain based systems such as Bitcoin, where your wallet addresses can be reused, IOTA's addresses should only be used once (for outgoing transfers). That means there is no limit to the number of transactions an address can receive, but as soon as you've used funds from that address to make a transaction, this address should not be used anymore.
The reason for this is that by making an outgoing transaction, a part of the private key of that specific address is revealed, and it opens the possibility that someone may brute force the full private key to gain access to all funds on that address. The more outgoing transactions you make from the same address, the easier it will be to brute force the private key.
It should be noted that having access to the private key of an address will not reveal your seed or the private key of the other addresses within your seed / "account".
This piggy bank diagram can help visualize non reusable addresses. imgur link

Address Index

When a new address is generated it is calculated from the combination of a seed + Address Index, where the Address Index can be any positive Integer (including "0"). The wallet usually starts from Address Index 0, but it will skip any Address Index where it sees that the corresponding address has already been attached to the tangle.

Private Keys

Private keys are derived from a seed's key index. From that private key you then generate an address. The key index, starting at 0, can be incremented to get a new private key, and thus a new address.
It is important to keep in mind that all security-sensitive functions are implemented client side. What this means is that you can generate private keys and addresses securely in the browser, or on an offline computer. All libraries provide this functionality.
IOTA uses winternitz one-time signatures, as such you should ensure that you know which private key (and which address) has already been used in order to not reuse it. Subsequently reusing private keys can lead to the loss of funds (an attacker is able to forge the signature after continuous reuse).
Exchanges are advised to store seeds, not private keys.

Double spending

Sending a transaction will move your entire balance to a completely new address, if you have more than one pending transaction only one can eventually be confirmed and the resulting balance is sent to your next wallet address. This means that the other pending transactions are now sent from an address that has a balance of 0 IOTA, and thus none of these pending transactions can ever be confirmed.

Transaction Process

As previously mentioned, in IOTA there are no miners. As such the process of making a transaction is different from any Blockchain out there today. The process in IOTA looks as follows:
  • Signing: You sign the transaction inputs with your private keys. This can be done offline.
  • Tip Selection: MCMC is used to randomly select two tips, which will be referenced by your transaction (branchTransaction and trunkTransaction)
  • Proof of Work: In order to have your transaction accepted by the network, you need to do some Proof of Work - similar to Hashcash, not Bitcoin (spam and sybil-resistance). This usually takes a few minutes on a modern pc.
After this is completed, the trunkTransaction, branchTransaction and nonce of the transaction object should be updated. This means that you can broadcast the transaction to the network now and wait for it to be approved by someone else.
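The three steps can be sketched in Python as follows; every function here is a toy stand-in (the signing, tip selection, and PoW are drastically simplified and are not the real IOTA algorithms), purely to show the order of operations:

    import hashlib, random
    from dataclasses import dataclass

    @dataclass
    class Tx:
        payload: str
        trunk: str = ""
        branch: str = ""
        nonce: int = 0
        signature: str = ""

    def sign(tx: Tx, seed: str) -> None:
        # Step 1 - Signing (can be done offline); stand-in for the real scheme.
        tx.signature = hashlib.sha256((seed + tx.payload).encode()).hexdigest()

    def select_two_tips(tips: list) -> tuple:
        # Step 2 - Tip selection; stand-in for the MCMC random walk.
        return random.choice(tips), random.choice(tips)

    def do_pow(tx: Tx, prefix: str = "000") -> None:
        # Step 3 - Proof of Work: hashcash-style search that fills in the nonce.
        while True:
            h = hashlib.sha256(f"{tx.payload}{tx.trunk}{tx.branch}{tx.nonce}".encode()).hexdigest()
            if h.startswith(prefix):
                return
            tx.nonce += 1

    tx = Tx(payload="send 1 Mi to ADDRESS9EXAMPLE")
    sign(tx, seed="MYSEED999EXAMPLE")
    tx.trunk, tx.branch = select_two_tips(["TIPA", "TIPB", "TIPC"])
    do_pow(tx)
    print(tx.nonce)   # transaction is now ready to broadcast to a node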

FAQ

How do I buy IOTA?

Currently not all exchanges support IOTA and those that do may not support the option to buy with fiat currencies.
One way to buy IOTA is with bitcoin (BTC) or Ether (ETH): first you will need to deposit BTC/ETH onto an exchange wallet, and you can then exchange them for IOTA.
You can buy BTC or ETH through Coinbase and exchange those for IOTA on Binance or Bitfinex (other exchanges do exist, some linked in the sidebar).
A detailed guide to buying can be found here.

What is MIOTA?

MIOTA is a unit of IOTA, 1 Mega IOTA or 1 Mi. It is equivalent to 1,000,000 IOTA and is the unit which is currently exchanged.
We can use the metric prefixes when describing IOTA e.g 2,500,000,000 i is equivalent to 2.5 Gi.
Note: some exchanges will display IOTA when they mean MIOTA.
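If it helps, a tiny conversion helper using the standard metric factors (the unit table here is my own shorthand, not an official API):

    UNITS = {"i": 1, "Ki": 10**3, "Mi": 10**6, "Gi": 10**9, "Ti": 10**12, "Pi": 10**15}

    def to_iota(amount: float, unit: str) -> int:
        return int(amount * UNITS[unit])

    def from_iota(iota: int, unit: str) -> float:
        return iota / UNITS[unit]

    print(to_iota(1, "Mi"))                 # 1000000 -> 1 MIOTA is 1,000,000 iota
    print(from_iota(2_500_000_000, "Gi"))   # 2.5     -> 2,500,000,000 i is 2.5 Gi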

Can I mine IOTA?

No, you cannot mine IOTA; the entire supply of IOTA already exists and no more can be created.
If you want to send IOTA, your 'fee' is that you have to verify 2 other transactions, thereby acting like a miner/node.

Where should I store IOTA?

It is not recommended to store large amounts of IOTA on the exchange as you will not have access to the private keys of the addresses generated.
However, many people have faced problems with the current GUI wallet, and therefore the group consensus at the moment is to store your IOTA on the exchange until the release of the UCL Wallet or the Paper Wallet.

What is the GUI wallet?

What is the UCL Wallet?

What is a seed?

A seed is a unique identifier that can be described as a combined username and password that grants you access to your wallet.
Your seed is used to generate the addresses linked to your account and so this should be kept private and not shared with anyone. If anyone obtains your seed, they can login and access your IOTA.

How do I generate a seed?

You must generate a random 81 character seed using only A-Z and the number 9.
It is recommended to use offline methods to generate a seed, and not recommended to use any non community verified techniques. To generate a seed you could:

On a Linux Terminal use the following command:

 cat /dev/urandom |tr -dc A-Z9|head -c${1:-81} 

On a Mac Terminal use the following command:

 cat /dev/urandom |LC_ALL=C tr -dc 'A-Z9' | fold -w 81 | head -n 1 

With KeePass on PC

A helpful guide for generating a secure seed on KeePass can be found here.

With dice

Dice roll template
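For illustration only, here is a minimal Python sketch that draws 81 characters from the operating system's secure random source; it is not a community-verified tool, so verify any generator you use against community guidance:

    import secrets

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"   # A-Z and the number 9

    def generate_seed(length: int = 81) -> str:
        # secrets.choice draws from the OS's cryptographically secure RNG
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_seed())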

Is my seed secure?

  1. All seeds should be 81 characters in random order composed of A-Z and 9.
  2. Do not give your seed to anyone, and don’t keep it saved in a plain text document.
  3. Don’t input your seed into any websites that you don’t trust.
Is this safe? Can’t anyone guess my seed?
What are the odds of someone guessing your seed?
  • IOTA seed = 81 characters long, and you can use A-Z, 9
  • Giving 27^81 = 8.7x10^115 possible combinations for IOTA seeds
  • Now let's say you have a "super computer" letting you generate and read every address associated with 1 trillion different seeds per second.
  • 8.7x10^115 seeds / 1x10^12 generated per second = 8.7x10^103 seconds = 2.8x10^96 years to process all IOTA seeds.
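That arithmetic can be checked directly in Python (the 1 trillion seeds-per-second rate is the hypothetical "super computer" from the bullet above):

    search_space = 27 ** 81        # 81 characters, 27 possibilities each
    rate = 10 ** 12                # hypothetical: 1 trillion seeds per second
    seconds = search_space / rate
    years = seconds / (60 * 60 * 24 * 365.25)

    print(f"{search_space:.1e} possible seeds")   # ~8.7e+115
    print(f"{seconds:.1e} seconds")               # ~8.7e+103
    print(f"{years:.1e} years")                   # ~2.8e+96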

Why does balance appear to be 0 after a snapshot?

When a snapshot happens, all transactions are being deleted from the Tangle, leaving only the record of how many IOTA are owned by each address. However, the next time the wallet scans the Tangle to look for used addresses, the transactions will be gone because of the snapshot and the wallet will not know anymore that an address belongs to it. This is the reason for the need to regenerate addresses, so that the wallet can check the balance of each address. The more transactions were made before a snapshot, the further away the balance moves from address index 0 and the more addresses have to be (re-) generated after the snapshot.

Why is my transaction pending?

IOTA's current Tangle implementation (IOTA is in constant development, so this may change in the future) has a confirmation rate that is ~66% at first attempt.
So, if a transaction does not confirm within 1 hour, it is necessary to "reattach" (also known as "replay") the transaction one time. Doing so one time increases probability of confirmation from ~66% to ~89%.
Repeating the process a second time increases the probability from ~89% to ~99.9%.

What does attach to the tangle mean?

The process of making a transaction can be divided into two main steps:
  1. The local signing of a transaction, for which your seed is required.
  2. Taking the prepared transaction data, choosing two transactions from the tangle and doing the POW. This step is also called “attaching”.
The following analogy makes it easier to understand:
Step one is like writing a letter. You take a piece of paper, write some information on it, sign it at the bottom with your signature to authenticate that it was indeed you who wrote it, put it in an envelope and then write the recipient's address on it.
Step two: In order to attach our “letter” (transaction), we go to the tangle, pick randomly two of the newest “letters” and tie a connection between our “letter” and each of the “letters” we choose to reference.
The “Attach address” function in the wallet is actually doing nothing more than making a 0-value transaction to the address that is being attached.

How do I reattach a transaction?

Reattaching a transaction is different depending on where you send your transaction from. To reattach using the GUI Desktop wallet follow these steps:
  1. Click 'History'.
  2. Click 'Show Bundle' on the 'pending' transaction.
  3. Click 'Reattach'.
  4. Click 'Rebroadcast'. (optional, usually not required)
  5. Wait 1 Hour.
  6. If still 'pending', repeat steps 1-5 once more.

What happens to pending transactions after a snapshot?

How do I recover from a long term pending transaction?

How can I support IOTA?

You can support the IOTA network by setting up a Full Node, this will help secure the network by validating transactions broadcast by other nodes.
Running a full node also means you don't have to trust a 3rd party in showing you the correct balance and transaction history of your wallet.
By running a full node you get to take advantage of new features that might not be installed on 3rd party nodes.

How to set up a full node?

To set up a full node you will need to follow these steps:
  1. Download the full node software: either GUI, or headless CLI for lower system requirements and better performance.
  2. Get a static IP for your node.
  3. Join the network by adding 7-9 neighbours.
  4. Keep your full node up and running as much as possible.
A detailed user guide on how to set up a VPS IOTA Full Node from scratch can be found here.

How do I get a static IP?

To learn how to setup a hostname (~static IP) so you can use the newest IOTA versions that have no automated peer discovery please follow this guide.

How do I find a neighbour?

Are you a single IOTA full node looking for a partner? You can look for partners in these places:

Extras

Transaction Example:

Multiple Address in 1 Wallet Explained:

submitted by Boltzmanns_Constant to IOTASupport [link] [comments]

The missing explanation of Proof of Stake Version 3 - Article by earlz.net

The missing explanation of Proof of Stake Version 3

In every cryptocurrency there must be some consensus mechanism which keeps the entire distributed network in sync. When Bitcoin first came out, it introduced the Proof of Work (PoW) system. PoW is done by cryptographically hashing a piece of data (the block header) over and over. Because of how one-way hashing works, one tiny change in the data can cause an extremely different hash to come out of it. Participants in the network determine if the PoW is valid by judging whether the final hash meets a certain condition, called the difficulty. The difficulty is an ever-changing "target" which the hash must meet or exceed. Whenever the network is creating more blocks than scheduled, this target is changed automatically by the network so that the target becomes more and more difficult to meet, and thus requires more and more computing power to find a hash that matches the target within the target time of 10 minutes.

Definitions

Some basic definitions might be unfamiliar to people who have not worked with the blockchain code; these are:

Proof of Work and Blockchain Consensus Systems

Proof of Work is a proven consensus mechanism that has made Bitcoin secure and trustworthy for 8 years now. However, it is not without its fair share of problems. PoW's major drawbacks are:
  1. PoW wastes a lot of electricity, harming the environment.
  2. PoW benefits greatly from economies of scale, so it tends to benefit big players the most, rather than small participants in the network.
  3. PoW provides no incentive to use or keep the tokens.
  4. PoW has some centralization risks, because it tends to encourage miners to participate in the biggest mining pool (a group of miners who share the block reward), thus the biggest mining pool operator holds a lot of control over the network.
Proof of Stake was invented to solve many of these problems by allowing participants to create and mine new blocks (and thus also get a block reward), simply by holding onto coins in their wallet and allowing their wallet to do automatic "staking". Proof Of Stake was originally invented by Sunny King and implemented in Peercoin. It has since been improved and adapted by many other people. This includes "Proof of Stake Version 2" by Pavel Vasin, "Proof of Stake Velocity" by Larry Ren, and most recently CASPER by Vlad Zamfir, as well as countless other experiments and lesser known projects.
For Qtum we have decided to build upon "Proof of Stake Version 3", an improvement over version 2 that was also made by Pavel Vasin and implemented in the Blackcoin project. This version of PoS as implemented in Blackcoin is what we will be describing here. Some minor details of it have been modified in Qtum, but the core consensus model is identical.
For many community members and developers alike, proof of stake is a difficult topic, because there has been very little written on how it manages to keep the network safe using only proof of ownership of tokens on the network. This blog post will go into fine detail about Proof of Stake Version 3 and how its blocks are created, validated, and ultimately how a pure Proof of Stake blockchain can be secured. This will assume some technical knowledge, but I will try to explain things so that most of the knowledge can be gathered from context. You should at least be familiar with the concept of the UTXO-based blockchain.
Before we talk about PoS, it helps to understand how the much simpler PoW consensus mechanism works. Its mining process can be described in just a few lines of pseudo-code:
    while(blockhash > difficulty) {
        block.nonce = block.nonce + 1
        blockhash = sha256(sha256(block))
    }
A hash is a cryptographic algorithm which takes an arbitrary amount of input data, does a lot of processing of it, and outputs a fixed-size "digest" of that data. It is impossible to figure out the input data with just the digest. So, PoW tends to function like a lottery, where you find out if you won by creating the hash and checking it against the target, and you create another ticket by changing some piece of data in the block. In Bitcoin's case, the nonce is used for this, as well as some other fields (usually called "extraNonce"). Once a blockhash is found which is less than the difficulty target, the block is valid, and can be broadcast to the rest of the distributed network. Miners will then see it and start building the next block on top of this block.
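For readers who prefer something runnable, here is a toy Python equivalent of that loop; the "block header" and the target below are made up, and the target is set far easier than Bitcoin's real difficulty so the example finishes quickly:

    import hashlib

    def sha256d(data: bytes) -> bytes:
        # Bitcoin's double-SHA256
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # Toy target: any hash numerically below this "wins the lottery".
    # Real Bitcoin targets are astronomically harder to meet.
    target = int("0000ffff" + "f" * 56, 16)

    header = b"example block header"    # stand-in for the real 80-byte header
    nonce = 0
    while True:
        h = int.from_bytes(sha256d(header + nonce.to_bytes(4, "little")), "big")
        if h < target:
            break
        nonce += 1

    print(f"found nonce {nonce}, hash {h:064x}")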

Proof of Stake's Protocol Structures and Rules

Now enter Proof of Stake. We have these goals for PoS:
  1. Impossible to counterfeit a block
  2. Big players do not get disproportionally bigger rewards
  3. More computing power is not useful for creating blocks
  4. No one member of the network can control the entire blockchain
The core concept of PoS is very similar to PoW, a lottery. However, the big difference is that it is not possible to "get more tickets" to the lottery by simply changing some data in the block. Instead of the "block hash" being the lottery ticket to judge against a target, PoS invents the notion of a "kernel hash".
The kernel hash is composed of several pieces of data that are not readily modifiable in the current block. And so, because the miners do not have an easy way to modify the kernel hash, they cannot simply iterate through a large number of hashes like in PoW.
Proof of Stake blocks add many additional consensus rules in order to realize its goals. First, unlike in PoW, the coinbase transaction (the first transaction in the block) must be empty and reward 0 tokens. Instead, to reward stakers, there is a special "stake transaction" which must be the 2nd transaction in the block. A stake transaction is defined as any transaction that:
  1. Has at least 1 valid vin
  2. Its first vout must be an empty script
  3. Its second vout must not be empty
Furthermore, staking transactions must abide by these rules to be valid in a block:
  1. The second vout must be either a pubkey (not pubkeyhash!) script, or an OP_RETURN script that is unspendable (data-only) but stores data for a public key
  2. The timestamp in the transaction must be equal to the block timestamp
  3. the total output value of a stake transaction must be less than or equal to the total inputs plus the PoS block reward plus the block's total transaction fees. output <= (input + block_reward + tx_fees)
  4. The first spent vin's output must be confirmed by at least 500 blocks (in other words, the coins being spent must be at least 500 blocks old)
  5. Though more vins can be used and spent in a staking transaction, the first vin is the only one used for consensus parameters.
These rules ensure that the stake transaction is easy to identify, and ensure that it gives enough info to the blockchain to validate the block. The empty vout method is not the only way staking transactions could have been identified, but this was the original design from Sunny King and has worked well enough.
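As a rough sketch of how a node might check a few of the rules above (simplified data structures of my own, not Blackcoin's or Qtum's actual classes, and the 500-confirmation depth check is omitted):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Vout:
        value: int       # base units
        script: bytes

    @dataclass
    class StakeTx:
        vin_values: List[int]   # values of the previous outputs being spent
        vouts: List[Vout]
        timestamp: int

    def looks_like_valid_stake_tx(tx: StakeTx, block_time: int,
                                  block_reward: int, tx_fees: int) -> bool:
        if len(tx.vin_values) < 1 or len(tx.vouts) < 2:
            return False
        if tx.vouts[0].script != b"":      # first vout must be an empty script
            return False
        if tx.vouts[1].script == b"":      # second vout must not be empty
            return False
        if tx.timestamp != block_time:     # tx timestamp must equal block time
            return False
        total_out = sum(v.value for v in tx.vouts)
        total_in = sum(tx.vin_values)
        # output <= (input + block_reward + tx_fees)
        return total_out <= total_in + block_reward + tx_fees

    tx = StakeTx([1000], [Vout(0, b""), Vout(1400, b"pubkey-script")], 1500000000)
    print(looks_like_valid_stake_tx(tx, 1500000000, 400, 10))   # True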
Now that we understand what a staking transaction is, and what rules they must abide by, the next piece is to cover the rules for PoS blocks:
There are a lot of details here that we'll cover in a bit. The most important part that really makes PoS effective lies in the "kernel hash". The kernel hash is used similar to PoW (if hash meets difficulty, then block is valid). However, the kernel hash is not directly modifiable in the context of the current block. We will first cover exactly what goes into these structures and mechanisms, and later explain why this design is exactly this way, and what unexpected consequences can come from minor changes to it.

The Proof of Stake Kernel Hash

The kernel hash specifically consists of the following exact pieces of data (in order): the previous block's stake modifier, the staked UTXO's transaction time, the staked UTXO's transaction hash, the staked UTXO's output index (n), and the current block time - the same fields that appear in the pseudo-code below.
The stake modifier of a block is itself a hash of data that is already fixed by earlier blocks, so it is not something the staker can freely choose.
The only way to change the current kernel hash (in order to mine a block) is thus to either change your "prevout" (which UTXO you stake), or to change the current block time.
A single wallet typically contains many UTXOs. The balance of the wallet is basically the total amount of all the UTXOs that can be spent by the wallet. This is of course the same in a PoS wallet. This is important though, because any output can be used for staking. One of these outputs is what can become the "prevout" in a staking transaction to form a valid PoS block.
Finally, there is one more aspect that is changed in the mining process of a PoS block. The difficulty is weighted against the number of coins in the staking transaction. The PoS difficulty ends up being twice as easy to achieve when staking 2 coins, compared to staking just 1 coin. If this were not the case, then it would encourage creating many tiny UTXOs for staking, which would bloat the size of the blockchain and ultimately cause the entire network to require more resources to maintain, as well as potentially compromise the blockchain's overall security.
So, if we were to show some pseudo-code for finding a valid kernel hash now, it would look like:
    while(true) {
        foreach(utxo in wallet) {
            blockTime = currentTime - currentTime % 16
            posDifficulty = difficulty * utxo.value
            hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
            if(hash < posDifficulty) {
                done
            }
        }
        wait 16s -- wait 16 seconds, until the block time can be changed
    }
This code isn't as easy to understand as our PoW example, so I'll attempt to explain it in plain English:
Do the following over and over, forever:
  1. Calculate the blockTime to be the current time minus itself modulo 16 (modulo is like dividing by 16, but taking the remainder instead of the result).
  2. Calculate the posDifficulty as the network difficulty, multiplied by the number of coins held by the UTXO.
  3. Cycle through each UTXO in the wallet.
  4. With each UTXO, calculate a SHA256 hash using the previous block's stake modifier, as well as some data from the UTXO, and finally the blockTime.
  5. Compare this hash to the posDifficulty. If the hash is less than the posDifficulty, then the kernel hash is valid and you can create a new block.
  6. After going through all UTXOs, if no hash produced is less than the posDifficulty, then wait 16 seconds and do it all over again.
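And here is a runnable Python rendering of the same search, with toy values throughout; the exact field serialization and difficulty encoding in real consensus code differ from this sketch:

    import hashlib, time
    from dataclasses import dataclass

    @dataclass
    class Utxo:
        tx_hash: bytes
        n: int
        tx_time: int
        value: int          # number of coins held by this output

    def kernel_hash(stake_modifier: bytes, utxo: Utxo, block_time: int) -> int:
        data = (stake_modifier + utxo.tx_time.to_bytes(4, "little")
                + utxo.tx_hash + utxo.n.to_bytes(4, "little")
                + block_time.to_bytes(4, "little"))
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    def try_to_stake(wallet, stake_modifier, difficulty):
        now = int(time.time())
        block_time = now - now % 16                       # 16-second time mask
        for utxo in wallet:
            pos_target = difficulty * utxo.value          # weighted by coins staked
            if kernel_hash(stake_modifier, utxo, block_time) < pos_target:
                return utxo, block_time                   # we may create a block
        return None                                       # wait 16 seconds and retry

    wallet = [Utxo(hashlib.sha256(bytes([i])).digest(), 0, 1700000000 + i, 100)
              for i in range(5)]
    # Toy difficulty chosen so the demo almost always finds a kernel immediately.
    print(try_to_stake(wallet, b"\x00" * 32, difficulty=2 ** 250))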
Now that we have found a valid kernel hash using one of the UTXOs we can spend, we can create a staking transaction. This staking transaction will have 1 vin, which spends the UTXO we found that has a valid kernel hash. It will have (at least) 2 vouts. The first vout will be empty, identifying to the blockchain that it is a staking transaction. The second vout will either contain an OP_RETURN data transaction that contains a single public key, or it will contain a pay-to-pubkey script. The latter is usually used for simplicity, but using a data transaction for this allows for some advanced use cases (such as a separate block signing machine) without needlessly cluttering the UTXO set.
Finally, any transactions from the mempool are added to the block. The only thing left to do now is to create a signature, proving that we have approved the otherwise valid PoS block. The signature must use the public key that is encoded (either as a pay-to-pubkey script, or as a data OP_RETURN script) in the second vout of the staking transaction. The actual data signed is the block hash. After the signature is applied, the block can be broadcast to the network. Nodes in the network will then validate the block, and if a node finds it valid and there is no better blockchain, it will accept it into its own blockchain and broadcast the block to all the nodes it has connections to.
Now we have a fully functional and secure PoSv3 blockchain. PoSv3 is what we determined to be most resistant to attack while maintaining a pure decentralized consensus system (ie, without master nodes or curators). To understand why we reached this conclusion, however, we must understand its history.

PoSv3's History

Proof of Stake has a fairly long history. I won't cover every detail, but cover broadly what was changed between each version to arrive at PoSv3 for historical purposes:
PoSv1 - This version is implemented in Peercoin. It relied heavily on the notion of "coin age", or how long a UTXO has not been spent on the blockchain. Its implementation would basically make it so that the higher the coin age, the more the difficulty is reduced. This had the bad side-effect, however, of encouraging people to only open their wallet every month or longer for staking. Assuming the coins were all relatively old, they would almost instantaneously produce new staking blocks. This however makes double-spend attacks extremely easy to execute. Peercoin itself is not affected by this because it is a hybrid PoW and PoS blockchain, so the PoW blocks mitigated this effect.
PoSv2 - This version removes coin age completely from consensus, as well as using a completely different stake modifier mechanism from v1. The number of changes are too numerous to list here. All of this was done to remove coin age from consensus and make it a safe consensus mechanism without requiring a PoW/PoS hybrid blockchain to mitigate various attacks.
PoSv3 - PoSv3 is really more of an incremental improvement over PoSv2. In PoSv2 the stake modifier also included the previous block time. This was removed to prevent a "short-range" attack where it was possible to iteratively mine an alternative blockchain by iterating through previous block times. PoSv2 used block and transaction times to determine the age of a UTXO; this is not the same as coin age, but rather is the "minimum confirmations required" before a UTXO can be used for staking. This was changed to a much simpler mechanism where the age of a UTXO is determined by its depth in the blockchain. This thus doesn't incentivize inaccurate timestamps to be used on the blockchain, and is also more immune to "timewarp" attacks. PoSv3 also added support for OP_RETURN coinstake transactions which allows for a vout to contain the public key for signing the block without requiring a full pay-to-pubkey script.

References:

  1. https://peercoin.net/assets/paper/peercoin-paper.pdf
  2. https://blackcoin.co/blackcoin-pos-protocol-v2-whitepaper.pdf
  3. https://www.reddcoin.com/papers/PoSV.pdf
  4. https://blog.ethereum.org/2015/08/01/introducing-casper-friendly-ghost/
  5. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/kernel.h#L11
  6. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.cpp#L2032
  7. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.h#L279
  8. http://earlz.net/view/2017/07/27/1820/what-is-a-utxo-and-how-does-it
  9. https://en.bitcoin.it/wiki/Script#Obsolete_pay-to-pubkey_transaction
  10. https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address_.28pay-to-pubkey-hash.29
  11. https://en.bitcoin.it/wiki/Script#Provably_Unspendable.2FPrunable_Outputs
Article by earlz.net
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
submitted by B3TeC to Moin [link] [comments]

Proof-of-key blockchain

Hello everyone. I've been thinking about a light alternative proof-of-(work/stake) algorithm for blockchains that doesn't imply hardware/electricy race. I'd like to request for your comments about it.
The reason why such exponential investment is made into this hardware/energy is because it is proportional to the chances of winning the proof-of-* race. The proposed algorithm to avoid such a race is to determine the winner before the race starts, with almost zero CPU power needed to discover its identity.
Let's consider that an arbitrary amount of coins have been pre-mined and sold to fund the development (e.g. 5%). In order to get a chance to be rewarded with newly mined coins and fees for discovering a new block, a node needs to have one or more reward-keypair(s). Such reward-keys can only be bought/registered on the blockchain, and their price must be set to at least the current number of coins rewarded for discovering a new block, let's say 50 coins for the first years, like for Bitcoin (1).
Buying/registering a new reward-key on the blockchain is like buying new rig hardware: the more you have on your node, the more you increase your chances to win the race (2). For every node to unanimously agree on the winner, they all need to work on the very same block of transactions; I explain later how I think this goal can be achieved. Then, a simple checksum hash needs to be computed by every node on the new block. It is made of the previous head block's nonce appended by the ordered sum of outgoing transactions' addresses (3), and it must have the same length in bits as the public-reward-keys (e.g. 256 bits); the public-reward-key that is the closest to this hash is the winner (nearest neighbor matching, like with LSH). The node that happens to be the winner (who owns the corresponding private-reward-key) has to claim the block by signing the totality of its data (block's head index on the chain, ordered transactions in full, plus its reward transaction) and broadcast its claim for other nodes to validate and add it to their blockchain's head (4). If the block is not claimed, it can be for multiple reasons: a blockchain fork (nodes not working on the very same block of transactions because of accidental or malicious cacophony), network latency, or simply the winning node being down. But I think these cases can be dealt with securely, as explained below.
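To make the "closest key wins" step concrete, here is a minimal sketch; XOR distance and simple concatenation of sorted addresses are my own assumptions for "nearest neighbor" and "ordered sum", since the post does not pin those down:

    import hashlib

    def block_checksum(prev_nonce: bytes, outgoing_addresses: list) -> int:
        # 256-bit checksum of the previous head block's nonce plus the ordered
        # outgoing addresses (combined here by sorting and concatenating).
        ordered = b"".join(sorted(outgoing_addresses))
        return int.from_bytes(hashlib.sha256(prev_nonce + ordered).digest(), "big")

    def winning_reward_key(checksum: int, registered_pubkeys: list) -> bytes:
        # Nearest-neighbor match: the registered public-reward-key with the
        # smallest XOR distance to the checksum wins this epoch.
        return min(registered_pubkeys,
                   key=lambda pk: checksum ^ int.from_bytes(pk, "big"))

    keys = [hashlib.sha256(bytes([i])).digest() for i in range(10)]   # toy key registry
    c = block_checksum(b"prev-nonce", [b"addr2", b"addr1", b"addr3"])
    print(winning_reward_key(c, keys).hex())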
In order to be sure that every node of the network is working on the very same block of transactions at the very same time, some rigorous synchronization has to be set up, with carrot and stick for the participating nodes. The first thing is to prevent transactions from being constantly broadcast, otherwise, because of propagation delay, the data of the new block would always be in an inconsistent state among the different nodes. As the delay for having data propagated to 99% of a P2P network (Bitcoin) appears to be about 40 seconds (4), I propose an arbitrary "pulse window" of 20 seconds for nodes to initiate the broadcast of their transactions (they need to synchronize at startup via NTP), followed by 40 seconds of retention of new transactions (meanwhile new transactions are being queued in each node, waiting for the next pulse), to give all the transactions time to reach the totality of the network. So, there is one broadcast pulse every minute (20+40), as well as one new block. If any nodes do not play the game (wrongdoing, misconfiguration, bad QoS, etc.) and trigger cacophony, the network will have to identify and ban them (5) at the next pulse. On the other hand, nodes that provide good synchronization, QoS, etc. will be rewarded by receiving a part of the fees of the transactions that they have initially broadcast. To do so, transactions and their entry node need to identify each other reciprocally. Each transaction identifies the entry node chosen for broadcast, and the node signs the transaction (or preferably a whole batch of transactions in a single network packet). Node identification is done via one of its reward-key(s).
If some transactions are sent too late, not reaching the totality (99.9%) of the network (likely to be initially broadcast around the 55th second, just before the end of the 20+40 seconds pulse (4), instead of the dedicated initial 20 seconds pulse window, because of intentional cacophony malice or misconfiguration, bad QoS being more unlikely for such a long lag), then the blockchain's working head will be forked into multiple heads. Therefore, the probability of finding the next block will be divided by the number of different forked heads (proportionally to the respective number of nodes working on each forked head). Let's take an arbitrary case scenario where the blockchain gets forked into 3 equally distributed heads, each representing 33.3% of the nodes; the respective chances to find each of these 3 different forked blocks are divided by 3 (for each forked head block there is a 66.6% chance that the winning reward-key is working on another block, and therefore won't claim it). Thus, after 2 or 3 iteration pulses (or even only one), the entirety of the network will find the block discovery/validation rate dramatically drop, which will trigger nodes to enter "cacophony mode": they stop emitting transactions, and broadcast the blocks they were working on after the cacophony was detected (and maybe one or two blocks before that as an uncertainty margin), as well as the node's signature of the block's hash (6). After a few seconds/minutes, all the nodes will have gathered a reference copy of all the different versions of blocks being worked on, along with the number of times they have been signed (i.e. in what proportions a specific version of a block was spread amongst the network). All nodes now have an accurate snapshot of the total topology and consistency of the network, a few blocks backward from the blockchain's head, before the fork happened. Then nodes can independently compare blocks, whitelisting every node that had its transactions registered on every block (meaning they were broadcast on time), and banning those that are on some blocks but not other popular ones (7); therefore the network self-heals by purging bad nodes, and resumes mining by rolling back to the last block that was mined before the cacophony started.
In the case of a node suspecting cacophony because it is on the fringe of the network or out of sync (thus not receiving transactions on time), other nodes won't be in "cacophony mode", so the node will find itself alone, not receiving any/enough different block versions (along with their signed hashes). It will therefore know that there is no cacophony, but rather bad QoS or configuration; this will need to be fixed by resyncing NTP, reconfiguring, changing peers, sysadmin intervention, etc. It will have to catch up quickly so as not to miss the race/reward.
In the case of a block not being claimed because the winning node is down, the network would enter "cacophony mode" as well, but figure out that it is consistent, therefore simply blacklisting the winning public-reward-key of the unclaimed block, until it gets unlocked by a dedicated "unlock message", signed with its corresponding private-reward-key when the node gets back online.
There might be plenty of smaller/bigger flaws that I did not think about; I'd like to ask for your help in identifying and hopefully fixing them. I've been thinking that rich wrongdoers could escape the carrot and stick policy constraint by buying reward-keys with the sole goal of preventing the network from taking off, provoking endless cacophony. I think this can be fixed by adjusting the price of the reward-keys over time (1), or even using a non-mandatory collaborative blacklist system for the early stage of network growth, until the price of reward-keys becomes dissuasive enough to prevent real prejudicial sabotage, even for rich wrongdoers. Also, because there is no CPU constraint for calculating blocks, it would be easy for anyone to forge a longer chain; however, I'm not sure that the longer-chain policy is the best here, and such forged chains could be easily detected because of overly redundant winners' identities (not representative of the global reward-key pool), not to mention that they cannot be broadcast, as nodes do not get new blocks from the network but calculate them internally.
What do you think?
Thanks,
Camille.
(1) Price for buying/registering a new reward-key cannot be lower to the number of coins rewarded for finding a block to prevent their number to be exponential, but it could/should be higher to prevent rich wrongdoers to buy many and use them to disturb the network, it could also maintain the size of the network to a consistent state. Here we take the example of 50 coins per reward-key, which means one every minute, one every few hours sounds more reasonable and manageable, but this is outside of the scope of this post.
(2) A special transaction has to be done for purchasing a reward-key, unlike when simply spending coins with outgoing/incoming wallet address, here you send your self-generated public-reward-key (needless to say while keeping the private key private) along with your 50 coins, in return the network makes the 50 coins available again to miners as a reward for the next block discovery, and register your public-reward-key on the blockchain. The reverse operation to destroy the reward-key for getting the 50 coins reimbursed should be possible, as well a replacing a reward-key by a new one if suspected by the owner of being corrupted/stolen. The 50 coins given when finding a new block (or being reimbursed) are made available again from a previous purchase(s), or newly created if this coin reserve is empty. The available monetary mass may inflate or shrink depending of the market demand for reward-keys (mining) or liquidity, this policy can be discussed and algorithmically adjusted/limited in the specs (e.g. coins made available again after buying rewards-keys cannot represent more than 10% of the minted coins).
(3) We use outgoing transaction's addresses because they cannot be forged on-the-fly to alter the resulting hash. If we use the full transaction for calculating the "winning hash", nodes could try to forge and inject one transaction at the last second, playing with decimals to get the closest result to one of their public-reward-key, which would incite again for a hardware/electricity race.
(4) http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf
(5) Quarantine duration should be incremental for each ban, e.g.: 3h, 12h, 72h, 2 weeks, 4 months, one year, etc.
(6) Any node signing more than one different block for the same head number will be banned (5) and its data ignored.
(7) In "cacophony mode" marginal blocks that are not widespread and lacking transactions number should be ignored, they are more likely to be on the fringe of the network, not having received some transactions on time because of QoS-like issues.
submitted by mammique to crypto [link] [comments]

[Informational] [CC0] Hashcash is King

Hashcash

A core building block of the Bitcoin protocol is the Hashcash concept. Bitcoin uses Hashcash to provide security from malicious alterations of the Blockchain, by imposing a cost for alteration that a miner must hope to recoup through rewards given for cooperation.
Hashcash is basically a way to publicly prove that energy was spent on solving an arbitrary problem, using a hashing algorithm. Hashing algorithms are cryptographic programs that take a set of data as an input and produce a one-way hash signature version of that data as an output. In Hashcash a problem is designed where the target hash value is known but a solution is very difficult to derive, and very easy to verify.
In Bitcoin the difficulty of the Hashcash problem is varied over time depending on the recent history of solution times, targeting a ten minute solution on average.
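A minimal hashcash-style mint/verify pair illustrates the asymmetry being described (this is a generic sketch, not Adam Back's exact stamp format or Bitcoin's header structure):

    import hashlib
    from itertools import count

    def mint(resource: str, bits: int = 20) -> str:
        # Burn CPU until the stamp's SHA256 hash has `bits` leading zero bits.
        for counter in count():
            stamp = f"{resource}:{counter}"
            digest = hashlib.sha256(stamp.encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - bits) == 0:
                return stamp

    def verify(stamp: str, resource: str, bits: int = 20) -> bool:
        # Verification costs a single hash - cheap for the receiver.
        digest = hashlib.sha256(stamp.encode()).digest()
        return (stamp.startswith(resource + ":")
                and int.from_bytes(digest, "big") >> (256 - bits) == 0)

    s = mint("alice@example.com")            # ~2**20 hashes on average
    print(s, verify(s, "alice@example.com"))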

Creation of Hashcash

The Hashcash concept was first proposed by cryptographer Adam Back in 1997 as a part of his work providing an anonymous email service to be used to promote free speech. Back came up with the Hashcash concept as a way to help prevent abuse by giving it an easily verifiable cost.
Adam Back realized that the concept could be useful beyond his own service, and worked to promote the concept to be used for other email services to prevent spam, or for other situations where Sybil resistant rate limiting could be useful. Over time Hashcash became widely known as an innovative idea, and SpamAssassin, Hotmail, Outlook, and I2P all included versions of the concept in their software.
Hashcash was also seen by crypto-anarchists as having the potential to be used in a decentralized money application that was long proposed, by Wei Dai and others, but never realized. Hal Finney made Hashcash a key component of his proposed digital currency, in a process that was very close to what would eventually become Bitcoin.

Hashing Algorithm

Hashcash as originally proposed used the SHA1 hashing algorithm, but many hashing algorithms are suitable for the concept. In Bitcoin's implementation of Hashcash, Satoshi Nakamoto opted to use the newer SHA256 algorithm instead of SHA1. At that time SHA1 had been shown to have a small design flaw in which the difficulty of creating multiple identical results was underestimated by a large degree. Bitcoin actually uses two cycles of SHA256 hashing instead of the standard single hash; this is thought to be because that would have been a way to reduce the impact of the design flaw in SHA1, had it been selected.
Since the time that SHA256 was chosen as Bitcoin's hashing algorithm, a newer version of the algorithm called SHA3 has been designed to directly address the issue found in SHA1 and to be used as a comprehensive upgrade for SHA256. It's possible at some point Bitcoin would upgrade to SHA3, however the benefits from the upgrade would be minimal. Even SHA1 would still work for Bitcoin's purposes had it been selected. The difficulties involved in modifying the network consensus to support SHA3 and limited benefit means that SHA256 may never be replaced.

ASIC Resistance

After Bitcoin started to rise in value, it started to become understood that the most efficient way to produce the SHA256 hashes would be using equipment that was specially designed to hash very quickly and efficiently, called ASICs. Since a widely distributed hashing network is seen as desirable for Bitcoin and many existing miners resented the imminent arrival of a new much more efficient competitor, it was proposed that the Bitcoin hashing algorithm be modified to one that would prevent the formation of extremely expensive but vastly more powerful ASICs.
Many in the Bitcoin community turned to the popular scrypt hashing algorithm, which was designed to perform key stretching on passwords to prevent brute forcing. Unfortunately, key problems with this plan emerged. The scrypt algorithm is not designed to be fast, it is designed to be slow, meaning verification of the proof of work is also slow, unlike SHA which is very fast.
A major reason scrypt was chosen was due to the perception that ASICs could not be produced for algorithms with high memory requirements. But in actuality scrypt does not fully require its specified memory target, it's simply the most optimal solution. Because of this, an ASIC could be designed with low memory use that simply brute forces its way past the memory requirement. Eventually alternative coins were created to put these theories to the test. ASICs were in fact created for the scrypt proof of work, and the alternative concept of using scrypt failed to gain mass appeal.
Over time it became more well understood that no known algorithm can be constructed to be truly resistant to ASIC creation. Given that constraint, the prevailing wisdom is that an optimal end-game for Bitcoin hashing would be to commoditize ASICs as much as possible, in order to gain the desired distributed hashing network that is seen as the ideal.

Hashing in Bitcoin

When performing a Hashcash challenge, all hashers may be essentially trying to find the same solution. To avoid all hashers starting at the same places, Bitcoin's hashing algorithm uses the reward Bitcoin address as a randomizer: every miner essentially starts mining at his own address which is a random number.
Bitcoin hashing also includes a counter to increment as hashes are attempted. This counter may be frequently reset as large numbers of hashes are tried, it resets against a random number called an extraNonce to create a new results space to search. The counter should also be reset after successfully finding a block.
Satoshi Nakamoto's own blocks are linked together because he did not reset his counter, allowing easy correlation. The counters are also reset to obscure the number of times the individual miner had to iterate to find the solution, to hide his hashing power.
submitted by pb1x to writingforbitcoin [link] [comments]

[brainstorming bitcoin scaling] Multiple Czars per Epoch: Is there some way we could better exploit miners' massive petahashes of processing power to find some approaches to massive scaling solutions?

TL;DR: During each 10-minute period, instead of appending a SINGLE block, append MULTIPLE mutually compatible ie non-overlapping blocks (eg, use IBLT to quickly and cheaply prove that the intersection of the sets of UTXOs being used in all these blocks is EMPTY).
Czar for an Epoch
The Bitcoin protocol involves solving an SHA hashing puzzle at the current mining difficulty to select one "czar" who gets to append their current block to the chain during the current "epoch".[1]
[1] This suggestive terminology of "czar" and "epoch" comes from the Cornell Bitcoin researchers who recently proposed Bitcoin-NG, where instead of electing a czar-cum-block for the current epoch the network would elect a czar-sans-block for the current epoch. This would drastically reduce the amount of network traffic for the election - but would also require "trusting" that czar in various ways (that he won't double-spend in the block he reveals now after his election, or that he won't become the target for a DDoS).
Architecturally, it seems that the most obvious bottlenecks in the existing architecture are this single czar and the single block they append to the chain.
What if we could figure out a way to append more blocks faster to the chain, while maintaining its structure?
What if we tried using something like IBLT to elect multiple czars per epoch?
Here's an approach I've been brainstorming, which I know might be totally crazy.
Hopefully some of the experts out there on stuff like IBLT (Inverted Bloom Lookup Tables) and related stuff could weigh in.
What if we elected multiple czars during an epoch - where each czar is incentivized to locally do whatever work they can in order to attempt to minimize the "overlap" (ie, the intersection) of their block (ie, the UTXOs in their block) with any other blocks being submitted by other "czars" for this "epoch"?
This might work as follows:
  • Use a Bloom Filter / IBLT to check that the intersection of two sets of UTXOs is empty.
  • This check almost never gives a false-positive, and never gives a false-negative;
  • Every epoch, in addition to the "SHA minimum-length zero-prefix hash lottery" we would also have an "IBLT maximal-non-intersecting-UTXOs hash lottery" (after the normal lottery), to elect multiple czars (each submitting a block) per epoch / 10-minute period - ie, the "multiple czars for this epoch" would be: all miners who submit a block where their block is mutually disjoint from all other blocks (in terms of UTXOs used), so all these non-intersecting blocks would get appended to the current chain (and the append order shouldn't matter, if there's also no intersection among the receiving addresses =).
https://en.wikipedia.org/wiki/Bloom_filter#The_union_and_intersection_of_sets
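A toy Bloom-filter disjointness check shows the asymmetry being relied on here; the sizes and hash counts are arbitrary, and a real IBLT is a more capable structure than this plain Bloom filter:

    import hashlib

    class Bloom:
        def __init__(self, m_bits: int = 1 << 16, k: int = 4):
            self.m, self.k, self.bits = m_bits, k, 0

        def _positions(self, item: bytes):
            for i in range(self.k):
                h = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(h, "big") % self.m

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits |= 1 << p

        def might_contain(self, item: bytes) -> bool:
            return all((self.bits >> p) & 1 for p in self._positions(item))

    def probably_disjoint(block_a_utxos, block_b_utxos) -> bool:
        # May very rarely return False for genuinely disjoint blocks (false
        # positive), but never returns True when a UTXO is shared.
        f = Bloom()
        for u in block_a_utxos:
            f.add(u)
        return not any(f.might_contain(u) for u in block_b_utxos)

    a = [f"utxo-a-{i}".encode() for i in range(1000)]
    b = [f"utxo-b-{i}".encode() for i in range(1000)]
    print(probably_disjoint(a, b))            # True: safe to append both blocks
    print(probably_disjoint(a, a[:1] + b))    # False: shared UTXO detected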
The current lone winner: the "SHA longest-zero-prefix lottery" block
Basically, the block which currently wins the lottery could still win the lottery (this is what I was calling the "SHA minimum-length zero-prefix" lottery above) - because it has so many zeros at the front of its SHA hash. Such an "SHA longest-zero-prefix lottery block" could indeed contain UTXOs which conflict with other blocks - but it would override all those other blocks, and be the only "SHA longest-zero-prefix lottery block" appended to the chain for the current epoch.
The additional new winners: multiple "IBLT biggest-non-intersecting BLOCKS" (PLURAL)
Now there could also be a bunch of other blocks (which were not the unique block winning the above SHA lottery - indeed, they might not have to do any SHA hashing at all), for which it has been proven that no other miner is submitting blocks using these same UTXOs (using IBLT to quickly and inexpensively - with low bandwidth - prove this property of non-intersection with the other blocks).
So theoretically many blocks (from many czars) could be appended during an epoch - vastly scaling the system.
Weird beneficial side-effects?
(1) "Mine your own sales"
If you're Starbucks (or some other retailer who wants to use zero-conf) you could set up a system where your customers could submit their transactions directly to you - and then you mine them yourself.
In other words, your customers wouldn't have to even broadcast the transaction from their smartphone - they could just use some kind of near-field communication to transmit the signed transaction to you the vendor, and you the vendor would then broadcast all these transactions to the network - using your better connectivity, where you would normally be 100% certain that nobody else was broadcasting blocks to the network using the same UTXOs - an assumption that would be strengthened if people's smartphone wallet software generally came from reliable sources such as the Google and Apple app stores - and if we as a community discourage programmers from releasing apps which support double-spending =).
This would have the immense benefit of allowing the Starbucks Mining Pool to guarantee that its batch / block of transactions has zero intersection (is mutually disjoint) with all other blocks being mined for that period.
It would also significantly decentralize mining, and align the interests of miners and vendors (since in many cases, a vendor would also want to be a miner - under the slogan "mine your own sales").
(2) "Mine locally, append globally"
If you're on one side of the Great Firewall of China, you could give more preference to mining the transactions that are "closest" to you, and less preference to mining the transactions that are "farthest" from you (in terms of network latency).
This would impose a kind of natural "geo-sharding" on the network, where miners prefer mining the transactions which are "closest" to them.
(3) "Scale naturally"
The throughput of the overall Bitcoin network could probably "scale" very naturally. It might not even matter if we kept the 1 MB block size limit - the system could simply scale by supporting the appending of more and more of these 1 MB blocks faster and faster per 10-minute epoch - as long as the total set of blocks to be appended during the epoch all have mutually disjoint (non-intersecting) sets of UTXOs.
(4) "No IBLT false-negatives means no accidental IBLT double-spends"
IBLTs are probabilistic - ie, they do not provide a 100% safe or guaranteed algorithm for determining if the intersection of two sets contains an element, or is empty.
However, the imperfections in the probabilistic nature of IBLTs are (fortunately) tilted in our favor when it comes to trying to append multiple blocks during the same epoch while preventing double spends.
This is because:
  • False-positives are almost impossible, but
  • False-negatives are totally impossible.
So:
  • in the worse case, IBLTs might RARELY incorrectly tell us that two blocks are unsafe to both append to the chain (ie, that the intersection of their UTXOs is non-empty)
  • but IBLTs will NEVER incorrectly tell us that two blocks are both safe to append (ie, that their intersection is empty).
This is exactly the kind of behavior we want.
Bonus if we could figure out a way to harness IBLT hashing the same way we currently harness SHA hashing (eg, have miners increment a "nonce" with each IBLT hash attempt, until all IBLT false positives are eliminated which incorrectly claimed that two blocks had intersecting UTXO sets).
submitted by ydtm to btc [link] [comments]

Question about probability

I'm working on a math-centric research paper on Bitcoins, and I've noticed something that doesn't sit well with me that I'd like to clear up, if possible.
A lot of people seem to make the comparison that the math behind generating a block is similar to this situation:
You have a hat with 100 slips of paper, numbered 1-100. You draw a slip from the hat, and if it is < 15, you win. If not, you put it back in the hat and draw again. You do this until you find a slip with value < 15.
In this scenario, you have a chance of success = .15 (unless I'm being idiotic right now). Each time you pull a slip you have the same probability of success; that is, each pull is independent. The probability is constant.
Here is a forum post that uses this comparison
Even on the bitcoin wiki, it says that the probability of success remains constant:
There's no such thing as being 1% towards solving a block. You don't make progress towards solving it. After working on it for 24 hours, your chances of solving it are equal to what your chances were at the start or at any moment. Believing otherwise is what's known as the Gambler's fallacy
Source
Now, that's all well and good, but isn't bitcoin mining inherently different from this because it relies on hash functions?
My problem with this idea is that if you hash your block header, you get one output, and it's wrong. Then you increment your nonce and hash again, and get a completely different hash. You keep doing this and you keep getting completely different hashes. Because that's how (good) hash functions work. If you got the same hash, for two or more different inputs at any point, you found a collision in SHA-256 and that is a very, very bad thing for a hash function to have (and at this time, we don't think that SHA-256 has any collisions. It's still possible that there will be collisions, but for the sake of this argument, we should assume that it works exactly as it ought to, with no collisions).
So, couldn't you say that there are 2^256 possible hashes, and when you hash a block header the first time and it's wrong, you've eliminated that possibility? So now there are 2^256 - 1 possible hashes to try? And after n attempts, there are 2^256 - n possible hashes? Similar to picking a slip of paper from the hat and removing it entirely, instead of putting it back?
I understand that the probability is still really, really low, but it IS increasing, isn't it? Every time you generate a new hash, you're getting 1 step closer to solving the block?
Or am I missing something completely?
This honestly isn't even important for my paper, it's just really bugging me right now.
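For what it's worth, the two models described above can be compared numerically (the target count and number of attempts below are made-up round numbers); the per-draw improvement from removing misses is too small to even register in floating point:

    space = 2 ** 256       # possible hash outputs
    target = 2 ** 200      # hypothetical count of hashes meeting the difficulty
    n = 10 ** 20           # an absurdly large number of failed attempts

    p_first = target / space            # "with replacement": constant every try
    p_after_n = target / (space - n)    # "without replacement": after n misses

    print(p_first)                      # ~1.4e-17
    print(p_after_n - p_first)          # 0.0 - the improvement is too small to
                                        # show up at double precision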
submitted by UnhappyHobo to Bitcoin [link] [comments]

Tree-chains Preliminary Summary - Peter Todd

Reposted from the Bitcoin development list on mailarchive.com. The author is Peter Todd.
https://www.mail-archive.com/[email protected]/msg04388.html

Tree-chains Preliminary Summary

Introduction

Bitcoin doesn't scale. There's a lot of issues at hand here, but the most fundamental of them is that to create a block you need to update the state of the UTXO set, and the way Bitcoin is designed means that updating that state requires bandwidth equal to all the transaction volume to keep up with the changes to that set. Long story short, we get O(n^2) scaling, which is just plain infeasible.
So let's split up the transaction volume so every individual miner only needs to keep up with some portion. In a rough sense that's what alt-coins do - all the tipping microtransactions on Doge never have to hit the Bitcoin blockchain for instance, reducing pressure on the latter. But moving value between chains is inconvenient; right now moving value requires trusted third parties. Two-way atomic chain transfers do help here, but as recent discussions on the topic showed there are all sorts of edge cases with reorganizations that are tricky to handle; at worst they could lead to inflation.
So what's the underlying issue there? The chains are too independent. Even with merge-mining there's no real link between one chain and another with regard to the order of transactions. Secondly merge-mining suffers from 51% attacks if the chain being merge-mined doesn't have a majority of total hashing power... which kinda defeats the point if we're worried about miner scalability.

Blocks and the TXO set as a binary radix tree

So how can we do better? Start with the "big picture" idea and take the linear blockchain and turn it into a tree:
 ┌───────┴───────┐ ┌───┴───┐ ┌───┴───┐ ┌─┴─┐ ┌─┴─┐ ┌─┴─┐ ┌─┴─┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ 
Obviously if we could somehow split up the UTXO set such that individual miners/full nodes only had to deal with subsets of this tree we could significantly reduce the bandwidth that any one miner would need to process. Every transaction output would get a unique identifier, say txoutid=H(txout) and we put those outputs in blocks appropriately.
We can't just wave a magic wand and say that every block has the above structure and all miners co-ordinate to generate all blocks in one go. Instead we'll do something akin to merge mining. Start with a linear blockchain with ten blocks. Arrows indicate hashing:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4 ⇽ a5 ⇽ a6 ⇽ a7 ⇽ a8 ⇽ a9 
The following data structure could be the block header in this scheme. We'll simplify things a bit and make up our own; obviously with some more effort the standard Satoshi structures can be used too:
struct BlockHeader: uint256 prevBlockHash uint256 blockContentsHash uint256 target uint256 nonce uint time 
For now we'll say this is a pure-proof-of-publication chain, so our block contents are very simple:
struct BlockContents: uint256 merkleRoot 
As usual the PoW is valid if H(blockHeader) < blockHeader.target. Every block creates new txouts, and the union of all such txouts is the txout set. As shown previously(1) this basic proof-of-publication functionality is sufficient to build a crypto-currency even without actually validating the contents of the so-called transaction outputs.
The scalability of this sucks, so let's add two more chains below the root to start forming a tree. For fairness we'll only allow miners to either mine a, a+b, or a+c; attempting to mine a block with both the b and c chains simultaneously is not allowed.
struct BlockContents: uint256 childBlockHash # may be null bool childSide # left or right uint256 merkleRoot 
Furthermore we shard the TXO space by defining txoid = H(txout) and allowing any txout in chain a, and only txouts with LSB=0 in b, LSB=1 in c; the beginning of a binary radix tree. With some variance thrown in we get the following:
 b0 ⇽⇽ b1 ⇽⇽⇽⇽⇽ b2 ⇽ b3 ⇽ b4 ⇽ b5 ⇽ b6 ⇽ b7 ⇽ b8 ↙ ↙ a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽⇽⇽⇽⇽⇽ a4 ⇽ a5 ⇽ a6 ⇽ a7 ⇽ a8 ↖ ↖ ↖ ↖ ↖ c0 ⇽ c1 ⇽ c2 ⇽ c3 ⇽⇽⇽⇽⇽⇽ c4 ⇽ c5 ⇽ c6 ⇽⇽⇽⇽⇽⇽ c7 
We now have three different versions of the TXO set: ∑a, ∑a + ∑b, and ∑a+∑c. Each of these versions is consistent in that for a given txoutid prefix we can achieve consensus over the contents of the TXO set. Of course, this definition is recursive:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽⇽⇽⇽⇽⇽ a4 ⇽ a5 ⇽ a6 ⇽ a7 ⇽ a8 ↖ ↖ ↖ ↖ ↖ c0 ⇽ c1 ⇽ c2 ⇽ c3 ⇽⇽⇽⇽⇽⇽ c4 ⇽ c5 ⇽ c6 ⇽⇽⇽⇽⇽⇽ c7 ↖ ↖ ↖ ↖ ↖ d0 ⇽ d1 ⇽⇽⇽⇽⇽⇽ d2 ⇽⇽⇽⇽⇽⇽ d3 ⇽ d4 ⇽⇽⇽ d5 ⇽⇽⇽⇽ d6 
Unicode unfortunately lacks 3D box drawing at present, so I've only shown left-sided child chains.
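A tiny sketch of the sharding rule above, with SHA256 standing in for H and the low bit of the txoutid deciding between the b and c chains (deeper levels would simply consume more low-order bits):

    import hashlib

    def txoutid(txout: bytes) -> int:
        return int.from_bytes(hashlib.sha256(txout).digest(), "big")

    def chain_for(txout: bytes) -> str:
        # Root chain 'a' may take anything; below it the low bit splits the space:
        # LSB=0 goes to chain b, LSB=1 goes to chain c.
        return "b" if txoutid(txout) & 1 == 0 else "c"

    for i in range(6):
        txout = f"txout-{i}".encode()
        print(txout.decode(), "->", chain_for(txout))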

Herding the child-chains

If all we were doing was publishing data, this would suffice. But what if we want to synchronize our actions? For instance, we may want a new txout to only be published in one chain if the corresponding txout in another is marked spent. What we want is a reasonable rule for child-chains to be invalidated when their parents are invalidated, so as to co-ordinate actions across distant child chains by relying on the existence of their parents.
We start by removing the per-chain difficulties, leaving only a single master proof-of-work target. Solutions less than target itself are considered valid in the root chain, less than the target << 1 in the root's children, << 2 in the children's children etc. In children that means the header no longer contains a time, nonce, or target; the values in the root block header are used instead:
struct ChildBlockHeader: uint256 prevChildBlockHash uint256 blockContentsHash 
For a given chain we always choose the one with the most total work. But to get our ordering primitive we'll add a second, somewhat brutal, rule: Parent always wins.
We achieve this moving the child block header into the parent block itself:
struct BlockContents: ChildBlockHeader childHeader # may be null (zeroed out) bool childSide # left or right bytes txout 
Let's look at how this works. We start with a parent and a child chain:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ↖ ↖ b0 ⇽ b1 ⇽ b2 ⇽ b3 ⇽ b4 ⇽ b5 
First there is the obvious scenario where the parent chain is reorganized. Here our node learns of a2 ⇽ a3' ⇽ a4':
 ⇽ a3' ⇽ a4' a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ X ↖ ↖ b0 ⇽ b1 ⇽ b2 ⇽ b3 ⇽ X 
Block a3 is killed, resulting in the orphaning of b3, b4, and b5:
a0 ⇽ a1 ⇽ a2 ⇽ a3' ⇽ a4' ↖ b0 ⇽ b1 ⇽ b2 
The second case is when a parent has a conflicting idea about what the child chain is. Here our node receives block a5, which has a conflicting idea of what child b2 is:
a0 ⇽ a1 ⇽ a2 ⇽ a3' ⇽ a4' ⇽ a5
  ↖                         ↖
   b0 ⇽ b1 ⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽ b2'
          ⇽ b2 ⇽ X
As the parent always wins, even multiple blocks can get killed off this way:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4
  ↖
   b0 ⇽ b1 ⇽ b2 ⇽ b3 ⇽ b4 ⇽ b5 ⇽ b6 ⇽ b7
to:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4 ⇽ a5
  ↖                       ↖
   b0 ⇽ b1 ⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽ b2'
          ⇽ b2 ⇽ b3 ⇽ b4 ⇽ b5 ⇽ X
This behavior is easier to understand if you instead say that the node learned about block b2', which had more total work than b2: the sum of the work done in the parent chain by blocks specifying that particular child chain is considered before comparing the total work done in only the child chain.
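A minimal sketch of that comparison, assuming work can be represented as plain integers and using illustrative names:

    def child_tip_key(parent_work_for_tip, child_only_work):
        # Higher tuple wins: parent-chain work committing to this child tip
        # dominates, and child-only work merely breaks ties.
        return (parent_work_for_tip, child_only_work)

    def best_child_tip(candidates):
        # candidates: iterable of (tip_id, parent_work_for_tip, child_only_work)
        return max(candidates, key=lambda c: child_tip_key(c[1], c[2]))[0]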
It's important to remember that the parent blockchain has, and validates, both children's block headers; it is not possible to mine a block with an invalid set of child headers. For instance with the following:
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4
  ↖    ↖    ↖
   b0 ⇽ b1 ⇽ b2 ⇽ b3 ⇽ b4 ⇽ b5 ⇽ b6 ⇽ b7
I can't mine a block a5 that says the block following b2 is b2' in an attempt to kill off b2 through b7.

Token transfer with tree-chains

How can we make use of this? Let's start with a simple discrete token transfer system. Transactions are simply:
struct Transaction:
    uint256 prevTxHash
    script prevPubKey
    script scriptSig
    uint256 scriptPubKeyHash
We'll say transactions go in the tree-chain according to their prevTxHash, with the depth in the tree equal to the depth of the previous output. This means that you can prove an output was created by the existence of that transaction in the block with prefix matching H(tx.prevTxHash), and you can prove the transaction output is unspent by the non-existence of a transaction in the block with prefix matching H(tx).
With our above re-organization rule everything is consistent too: if block b_i contains tx1, then the corresponding block c_j can contain a valid tx2 spending tx1, provided that c_j depends on a_p and there is a path from a_p to b_(i+k). Here's an example, starting with tx1 in c2:
 b0 ⇽⇽⇽⇽⇽⇽ b1
↙
a0 ⇽ a1 ⇽ a2
  ↖
   c0 ⇽ c1 ⇽ c2
Block b2 below can't yet contain tx2 because there is no path:
 b0 ⇽⇽⇽⇽⇽⇽ b1 ⇽ b2
↙
a0 ⇽ a1 ⇽ a2
  ↖
   c0 ⇽ c1 ⇽ c2
However now c3 is found, whose PoW solution was also valid for a3:
 b0 ⇽⇽⇽⇽⇽⇽ b1 ⇽ b2
↙
a0 ⇽ a1 ⇽ a2 ⇽ a3
  ↖             ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3
Now b3 can contain tx2, as b3 will also attempt to create a4, which depends on a3:
 b0 ⇽⇽⇽⇽⇽⇽ b1 ⇽ b2 ⇽ b3
↙
a0 ⇽ a1 ⇽ a2 ⇽ a3
  ↖             ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3
Now that a3 exists, block c2 can only be killed if a3 is, which would also kill b3 and thus destroy tx2.

Proving transaction output validity in a token transfer system

How cheap is it to prove the entire history of a token is valid from genesis? Perhaps surprisingly, without any cryptographic moon-math the cost is only linear!
Remember that a transaction in a given chain has committed to the chain that it can be spent in. If Alice is to prove to Bob that the output she gave him is valid, she simply needs to prove that, for every transaction in the history of the token, the token was created, remained unspent, and then finally was spent. Proving a token remained unspent between blocks b_n and b_m is trivially possible in linear size. Once the token is spent, nothing about blocks beyond b_m is required. Even if miners do not validate transactions at all, the proof size remains linear provided blocks themselves have a maximum size; at worst the proof contains some invalid transactions that can be shown to be false spends.
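As a rough illustration only (the field names and layout are assumptions, not a specified format), such a proof can be pictured as one record per hop of the token's history, each carrying an inclusion proof for the creating transaction and one exclusion proof per block in which the token must be shown unspent:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Hop:
        creating_tx: bytes        # transaction that created the output
        inclusion_proof: bytes    # shows it was published in the right block
        # one exclusion proof per later block, showing the output was not spent there
        exclusion_proofs: List[bytes] = field(default_factory=list)
        spending_tx: bytes = b""  # transaction that finally spends it (empty at the tip)

    @dataclass
    class TokenHistoryProof:
        hops: List[Hop]

        def size(self):
            # Grows linearly with the number of blocks the history spans.
            return sum(len(h.creating_tx) + len(h.inclusion_proof) + len(h.spending_tx)
                       + sum(map(len, h.exclusion_proofs)) for h in self.hops)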
While certainly inconvenient, it is interesting how such a simple system appears, in theory, to scale to unlimited numbers of transactions and, with an appropriate exchange rate, to move unlimited amounts of value. A possible model would be for the tokens themselves to have power-of-two values, and to be split and combined as required.

The lost data problem

There is however a catch: what happens when blocks get lost? Parent blocks only contain their children's headers, not the block contents. At some point the difficulty of producing a block will drop sufficiently for malicious or accidental data loss to be possible. With the "parent chain wins" rule it must be possible to recover from that event for mining on the child to continue.
Concretely, suppose you have tx1 in block c2, which can be spent on chain b. The contents of chain a are known to you, but the full contents of chain b are unavailable:
 b0 ⇽ b1       (b)  (b)
↙             ↙    ↙
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4 ⇽ a5
  ↖    ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3 ⇽ c4 ⇽ c5
Blocks a3 and a4 are known to have children on b, but the contents of those children are unavailable. We can define some maximum ratio of unknown to known blocks that a proof may contain and still be considered valid. Here we show a 1:1 ratio:
  ⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽⇽
 b0 ⇽ b1  (b)  (b)  b2 ⇽ b3 ⇽ b4 ⇽ b5 ⇽ b6 ⇽ b7
↙        ↙    ↙    ↙    ↙    ↙
a0 ⇽ a1 ⇽ a2 ⇽ a3 ⇽ a4 ⇽ a5 ⇽ a6 ⇽ a7 ⇽ a8 ⇽ a9
  ↖    ↖    ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3 ⇽ c4 ⇽ c5 ⇽ c6 ⇽ c7 ⇽ c8 ⇽ c9
The proof now shows that while a3 and a4 have b-side blocks, by the time you reach b6 those two lost blocks are in the minority. Of course a real system needs to be careful that mining blocks and then discarding them isn't a profitable way to create coins out of thin air; ratios well in excess of 1:1 are likely to be required.
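A minimal sketch of such a ratio rule, assuming the proof simply records, for each covered child block, whether its contents are known:

    def proof_acceptable(blocks_known, known_per_unknown=1):
        # blocks_known: list of booleans, one per child block covered by the proof.
        unknown = blocks_known.count(False)
        known = blocks_known.count(True)
        # At 1:1 every unknown block must be matched by a known one; a real
        # system would likely demand a much higher ratio, as noted above.
        return known >= unknown * known_per_unknown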

Challenge-response resolution

Another idea is to say that if the parent blockchain's contents are known, we can insert a challenge into it specifying that a particular child block be published verbatim in the parent. Once the challenge is published, further parent blocks may not reference children on that side until either the desired block is republished or some timeout is reached. If the timeout is reached, mining backtracks to some previously known child specified in the challenge. In the typical case the block is known to a majority of miners and is published, essentially allowing new miners to force the existing ones to "cough up" blocks they aren't publishing and allowing the new ones to continue mining. (Obviously some care needs to be taken with regard to incentives here.)
While an attractive idea, this is our first foray into moon math. Suppose such a challenge was issued in block a2, asking for the contents of b1 to be published. Meanwhile tx1 is created in block c3, and can only be spent on a b-side chain:
 b0 ⇽ b1
↙
a0 ⇽ a1 ⇽ (a2) ⇽ a3
  ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3
The miners of the b-chain can violate the protocol by mining a4/b1', where b1' appears to contain valid transaction tx2:
 b0 ⇽ b1              b1'
↙                    ↙
a0 ⇽ a1 ⇽ (a2) ⇽ a3 ⇽ a4
  ↖
   c0 ⇽ c1 ⇽ c2 ⇽ c3
A proof of tx2 as a valid payment would entirely miss the fact that the challenge was published, and thus not know that b1' was invalid. While I'm sure the reader can come up with all kinds of complex and fragile ways of proving fraud to cause chain a to be somehow re-organized, what we really want is some sub-linear proof of honest computation. Without getting into details, this is probably possible via some flavor of sub-linear moon-math proof-of-execution. But this paper is too long already to start getting snarky.

Beyond token transfer systems

We can extend our simple one txin, one txout token transfer transactions with merkle (sum) trees. Here's a rough sketch of the concept:
input #1─┐   ┌─output #1
         ├┐ ┌┤
input #2─┘│ │└─output #2
          ├─┤
input #3─┐│ │┌─output #3
         ├┘ └┤
input #4─┘   └─output #4
Where previously a transaction committed to a specific transaction output, we can make our transactions commit to a merkle-sum-tree of transaction outputs. To then redeem a transaction output you prove that enough prior outputs were spent to add up to the new output's value. The entire process can happen incrementally without any specific co-operation between miners on different parts of the chain, and inputs and outputs can come from any depth in the tree, provided that care is taken to ensure that reorganization is not profitable.
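A minimal sketch of a merkle-sum-tree node, where every node commits both to its children and to the sum of the values beneath it; the hash and serialization choices here are assumptions:

    import hashlib

    def sum_leaf(value, txout_bytes):
        h = hashlib.sha256(value.to_bytes(8, 'big') + txout_bytes).digest()
        return (value, h)

    def sum_node(left, right):
        # Parent value is the sum of the children; the hash commits to both.
        (lv, lh), (rv, rh) = left, right
        value = lv + rv
        h = hashlib.sha256(value.to_bytes(8, 'big') + lh + rh).digest()
        return (value, h)

Verifying a branch then just means re-hashing from leaf to root and checking that the claimed sums add up at every level.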
As in the token transfer system, proving that a given output is valid has cost linear in the history. However we can improve on that. For instance, in the linear token transfer example the history only needs to be proven back to the point where the transaction fees are higher than the value of the output (easiest where the work required to spend a txout of a given value is well defined). A similar approach can easily be taken with the directed acyclic graph of multiple-input-output transactions. Secondly, non-interactive proof techniques can also be used, though again that is out of the scope of this already long preliminary paper.
1) "Disentangling Crypto-Coin Mining: Timestamping, Proof-of-Publication, and Validation",
http://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg03307.html
submitted by isysd to Blocktrees [link] [comments]

Proof-of-key blockchain

Hi, this is a repost from /crypto where it was off topic (https://www.reddit.com/crypto/comments/6vdfoc/proofofkey_blockchain/).
/*****************************************************************************/
Hello everyone. I've been thinking about a light alternative proof-of-(work/stake) algorithm for blockchains that doesn't imply a hardware/electricity race. I'd like to request your comments on it.
The reason such exponential investment is made in hardware/energy is that it is proportional to the chances of winning the proof-of-* race. The proposed algorithm avoids such a race by determining the winner before the race starts, with almost zero CPU power needed to discover its identity.
Let's consider that an arbitrary amount of coins has been pre-mined and sold to fund development (e.g. 5%). In order to get a chance to be rewarded with newly mined coins and fees for discovering a new block, a node needs to have one or more reward-keypair(s). Such reward-keys can only be bought/registered on the blockchain, and their price must be set to at least the current number of coins rewarded for discovering a new block, let's say 50 coins for the first years, as with Bitcoin (1).
Buying/registering a new reward-key on the blockchain is like buying new rig hardware: the more you have on your node, the more you increase your chances of winning the race (2). For every node to unanimously agree on the winner, they all need to work on the very same block of transactions; I explain later how I think this goal can be achieved. Then a simple checksum hash is computed by every node over the new block. It is made of the previous head block's nonce appended with the ordered sum of the outgoing transactions' addresses (3), and must have the same length in bits as the public-reward-keys (e.g. 256 bits); the public-reward-key that is closest to this hash is the winner (nearest neighbor matching, as with LSH). The node that happens to be the winner (the one that owns the corresponding private-reward-key) has to claim the block by signing the totality of its data (the block's head index on the chain, the ordered transactions in full, plus its reward transaction) and broadcast its claim for other nodes to validate and add to their blockchain's head (4). If the block is not claimed, it can be for multiple reasons: a blockchain fork (nodes not working on the very same block of transactions because of accidental or malicious cacophony), network latency, or simply the winning node being down. But I think these cases can be dealt with securely, as explained below.
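A minimal sketch of that winner selection; reading "ordered sum" as a sorted concatenation of the addresses and using XOR as the nearest-neighbor metric are my assumptions, not part of the proposal:

    import hashlib

    def winning_hash(prev_nonce, outgoing_addresses):
        # Checksum over the previous head block's nonce and the ordered
        # outgoing transaction addresses, sized like a reward-key (256 bits).
        data = prev_nonce + b"".join(sorted(outgoing_addresses))
        return int.from_bytes(hashlib.sha256(data).digest(), 'big')

    def winner(reward_pubkeys, target_hash):
        # The registered public-reward-key closest to the hash claims the block.
        return min(reward_pubkeys, key=lambda k: int.from_bytes(k, 'big') ^ target_hash)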
In order to be sure that every node of the network is working on the very same block of transactions at the very same time, some rigorous synchronization has to be set up, with a carrot and stick for the participating nodes. The first thing is to prevent transactions from being broadcast continuously, otherwise, because of propagation delay, the data of the new block would always be in an inconsistent state among the different nodes. As the delay for data to propagate to 99% of a P2P network (Bitcoin) appears to be about 40 seconds (4), I propose an arbitrary "pulse window" of 20 seconds for nodes to initiate the broadcast of their transactions (they need to synchronize at startup via NTP), followed by 40 seconds of retention of new transactions (meanwhile new transactions are queued in each node, waiting for the next pulse), to give all the transactions time to reach the totality of the network. So there is one broadcast pulse every minute (20+40), as well as one new block. If any nodes do not play the game (wrongdoing, misconfiguration, bad QoS, etc.) and trigger cacophony, the network will have to identify and ban them (5) at the next pulse. On the other hand, nodes that provide good synchronization, QoS, etc. will be rewarded by receiving a part of the fees of the transactions that they initially broadcast. To do so, transactions and their entry node need to identify each other reciprocally: each transaction identifies the entry node chosen for broadcast, and the node signs the transaction (or preferably a whole batch of transactions in a single network packet). Node identification is done via one of its reward-key(s).
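A minimal sketch of the pulse timing, assuming NTP-synchronized clocks and the 20 s broadcast / 40 s retention split described above:

    import time

    PULSE_PERIOD = 60      # one pulse, and one block, per minute
    BROADCAST_WINDOW = 20  # seconds at the start of each pulse for initiating broadcasts

    def may_broadcast(now=None):
        # Transactions may only be initiated during the first 20 seconds of each
        # minute; anything arriving later is queued for the next pulse.
        now = time.time() if now is None else now
        return (now % PULSE_PERIOD) < BROADCAST_WINDOW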
If some transactions are sent too late and do not reach the totality (99.9%) of the network (likely because they were initially broadcast around the 55th second, just before the end of the 20+40 second pulse (4), instead of within the dedicated initial 20-second pulse window; most likely intentional cacophony malice, since misconfiguration or bad QoS is more unlikely for such a long lag), then the blockchain's working head will be forked into multiple heads. Therefore the probability of finding the next block will be divided by the number of different forked heads (proportionally to the respective number of nodes working on each forked head). Take an arbitrary scenario where the blockchain gets forked into 3 equally distributed heads, each representing 33.3% of the nodes: the respective chances of finding each of these 3 different forked blocks are divided by 3 (for each forked head block there is a 66.6% chance that the winning reward-key is working on another block and therefore won't claim it). Thus, after 2 or 3 pulses (or even only one), the entire network will see the block discovery/validation rate drop dramatically, which will trigger nodes to enter "cacophony mode": they stop emitting transactions and broadcast the blocks they were working on when the cacophony was detected (and maybe one or two blocks before that as an uncertainty margin), as well as the node's signature of each block's hash (6). After a few seconds/minutes, all the nodes will have gathered a reference copy of all the different versions of the blocks being worked on, along with the number of times each has been signed (i.e. in what proportion a specific version of a block had spread amongst the network). All nodes now have an accurate snapshot of the total topology and consistency of the network, a few blocks back from the blockchain's head, before the fork happened. Then nodes can independently compare blocks, whitelisting every node whose transactions are registered in every block (meaning they were broadcast on time) and banning those whose transactions are in some blocks but not in other popular ones (7). The network therefore self-heals by purging bad nodes, and resumes mining by rolling back to the last block that was mined before the cacophony started.
If a node suspects cacophony because it is on the fringe of the network or out of sync (thus not receiving transactions on time), the other nodes won't be in "cacophony mode", so the node will find itself alone, not receiving any (or enough) different block versions along with their signed hashes. It will therefore know that there is no cacophony, just bad QoS or configuration on its side, which will need to be fixed by resyncing NTP, reconfiguring, changing peers, sysadmin intervention, etc. It will have to catch up quickly so as not to miss the race/reward.
If a block is not claimed because the winning node is down, the network enters "cacophony mode" as well, but will figure out that it is consistent, and therefore simply blacklists the winning public-reward-key of the unclaimed block until it is unlocked by a dedicated "unlock message" signed with the corresponding private-reward-key once the node is back online.
There might be plenty of smaller/bigger flaws that I did not think about; I'd like to ask for your help in identifying and hopefully fixing them. I've been thinking that rich wrongdoers could escape the carrot-and-stick constraint by buying reward-keys with the sole goal of preventing the network from taking off, provoking endless cacophony. I think this can be fixed by adjusting the price of the reward-keys over time (1), or even by using a non-mandatory collaborative blacklist system for the early stage of network growth, until the price of reward-keys becomes dissuasive enough to deter real, prejudicial sabotage, even for rich wrongdoers. Also, because there is no CPU constraint for calculating blocks, it would be easy for anyone to forge a longer chain; however I'm not sure that the longest-chain policy is the best here, and such forged chains could be easily detected because the winners' identities would be too redundant (not representative of the global reward-key pool), not to mention that such a chain cannot be broadcast, as nodes do not get new blocks from the network but calculate them internally.
What do you think?
Thanks,
Camille.
(1) The price for buying/registering a new reward-key cannot be lower than the number of coins rewarded for finding a block, to prevent the number of keys from growing exponentially, but it could/should be higher to prevent rich wrongdoers from buying many and using them to disturb the network; it could also keep the size of the network in a consistent state. Here we take the example of 50 coins per reward-key, which means one new block every minute; one every few hours sounds more reasonable and manageable, but that is outside the scope of this post.
(2) A special transaction has to be done to purchase a reward-key. Unlike when simply spending coins between outgoing/incoming wallet addresses, here you send your self-generated public-reward-key (needless to say, while keeping the private key private) along with your 50 coins; in return the network makes the 50 coins available again to miners as a reward for the next block discovery, and registers your public-reward-key on the blockchain. The reverse operation, destroying the reward-key to get the 50 coins reimbursed, should be possible, as well as replacing a reward-key with a new one if the owner suspects it has been corrupted/stolen. The 50 coins given when a new block is found (or when a key is reimbursed) are made available again from previous purchase(s), or newly created if this coin reserve is empty. The available monetary mass may inflate or shrink depending on the market demand for reward-keys (mining) or liquidity; this policy can be discussed and algorithmically adjusted/limited in the specs (e.g. coins made available again after buying reward-keys cannot represent more than 10% of the minted coins).
(3) We use the outgoing transactions' addresses because they cannot be forged on the fly to alter the resulting hash. If we used the full transaction to calculate the "winning hash", nodes could try to forge and inject a transaction at the last second, playing with decimals to get a result closest to one of their public-reward-keys, which would again incite a hardware/electricity race.
(4) http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf
(5) Quarantine duration should be incremental for each ban, e.g.: 3h, 12h, 72h, 2 weeks, 4 months, one year, etc.
(6) Any node signing more than one different block for the same head number will be banned (5) and its data ignored.
(7) In "cacophony mode", marginal blocks that are not widespread and lack transactions should be ignored; their nodes are more likely to be on the fringe of the network, not having received some transactions on time because of QoS-like issues.
submitted by mammique to CryptoCurrencies [link] [comments]

Mining question: Doesn't mining hardware simply guess at a nonce and then test whether it is a solution? There is no incremental "solving" of a nonce, correct? Only the speed of nonce guessing and testing matters, correct? What is the length of the current nonce to be found?

In other words, the word "solving" a block isn't really accurate, right?
When a miner is closing in on a nonce that will solve that block, it is not partially solving the nonce and then solving the last part of it. Each nonce is in no way predictable, neither its first part nor the last nonce tested for that block, correct?
So using the word "solving" is really sort of a bad usage.
It is simply guessing and testing as fast as it can, along with the other miners in the mining pool (if in a pool).
Also, what is the length of the current nonce being solved, and where is that shown (somewhere in a report on blockchain.info no doubt)?
edit:
looking at this https://en.bitcoin.it/wiki/Nonce
It seems what I should be asking is how many zeros the hash must have in recent blocks.
Looking at recent blocks here:
https://blockchain.info/block-index/454092

    Difficulty  1,180,923,195.2580261
    Bits        419668748
    Size        0.2548828125 KB
    Version     2
    Nonce       3125706805

The bits and difficulty values are not changing.
The nonce length is jumping all over the place, so apparently the nonce length has no relationship to the final hash length or difficulty.
What exactly is the "bits" value?
What exactly is the "difficulty" value? I know they increase the difficulty to ensure a new block is created on average in a certain time period, but what does the actual number mean or express?
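For what it's worth, here is a minimal sketch of the guess-and-test loop being described (the header handling is deliberately simplified): the nonce is a fixed 32-bit field, "bits" is a compact encoding of the target the hash must fall below, and "difficulty" is the ratio between the easiest allowed target and the current one. There is no partial solving; each hash attempt either clears the target or it doesn't.

    import hashlib

    def bits_to_target(bits):
        # Compact "bits" encoding: the high byte is an exponent, the rest a mantissa;
        # target = mantissa * 256^(exponent - 3).
        exponent = bits >> 24
        mantissa = bits & 0x00ffffff
        return mantissa * (1 << (8 * (exponent - 3)))

    def mine(header_without_nonce, bits, max_nonce=2**32):
        target = bits_to_target(bits)
        for nonce in range(max_nonce):
            # Each nonce is an independent guess; there is no partial progress.
            h = hashlib.sha256(hashlib.sha256(
                header_without_nonce + nonce.to_bytes(4, 'little')).digest()).digest()
            if int.from_bytes(h, 'little') <= target:
                return nonce  # a "solution" is just a guess that happened to be low enough
        return None  # all 2^32 nonces tried: tweak the timestamp/merkle root and start over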
submitted by georedd to Bitcoin [link] [comments]
