How to Obscure Bitcoin and Bitcoin Cash Transactions
Major Bitcoin Core DoS Vulnerability Has Been Fixed
Crypto Developer Warns “Bitcoin Cash May Introduce Fatal
Bitcoin Core version 0.15.1 released
Avoid treating null RPC arguments different from missing
Reddcoin (RDD) Core Wallet Release - v3.10.0rc4 Core Staking (PoSV v2) Wallet including MacOS Catalina and more!
https://github.com/reddcoin-project/reddcoin/releases/tag/v3.10.0rc4

The Reddcoin (RDD) Core Dev team releases the v3.10.0rc4 Core Wallet. It includes full MacOS Catalina support, Bitcoin 0.10 codebase features, security and other enhancements. The full changelog is available on GitHub; complete release notes will be published with the full 3.10.0 release, anticipated shortly.

NOTE: This v3.10.0rc4 code is a pre-release, but may be used on mainnet for normal operations. This final "release candidate" version addresses an issue where an individual PoSV v2 stake transaction could be modified such that no funds went to the developer (see Issue #155 for a description). It also includes additional components of the enhanced build system: Travis continuous integration (CI) and Transifex translations. The pre-release v3.10.0rc4 binaries are not certificate signed.

To assist with translations, correct text, or add languages, please join the following team: https://www.transifex.com/reddcoin/reddcoin/qt-translation-v310/ To assist with other aspects of the Reddcoin project, please contact TechAdept or any member of the team.

A bootstrap (zipped folder of the blockchain, uploaded 5-1-20) may be downloaded here if required: https://drive.google.com/file/d/1ItVFGiDyIH5SfCNhfrj29Qavg8LWmfZy/view?usp=sharing

Commits included since rc3:
2a8c7e6 Preparations for 3.10.0 rc4
4a6f398 Update translations
7aa5151 build: update reference time to something more recent
1a65b8c Update translations
d4a1ca6 transifex: update translation instructions
a03895b transifex: update config for this release
51ad1e0 move check before supermajority reached
794680f Make check for developer address when receiving block
457503e travis: Remove group: legacy
97d3a2a travis: Remove depreciated sudo flag
21dcfa6 docs: update release notes
7631aac update error messages
5b41e31 check that the outputs of the stake are correct
9bd1820 travis: test with wallet enabled
55f2dd5 fix reference to Reddcoin
220f404 travis: disable libs for windows builds (temp)
b044e0f depends: qt update download source path
2fe2d85 depends: set new download source
4cf531e remove duplicated entry
0d8d0da travis: diable tests
e13ad81 travis: manually disable sse2 support for ARM processors
1f62045 travis: fix crash due to missing (and not required) package
0fb3b75 travis: update path
9d6a642 docs: update travis build status badge with correct path
Our first generation hardware wallets were made of military-grade aerospace aluminum. We’ve stripped all that down to just focus on air-gapping your private keys.
Hey bitcoin! I'm Lixin, a longtime bitcoiner and creator of Cobo Vault. I come from a background in the electronic hardware industry, and one of my products was featured in Apple Stores around the world. Back in 2018 Cobo CEO Discus Fish, who also co-founded F2Pool, invited me to help build Cobo’s hardware product line.

As we had strong ties to miners in China, we naturally designed the 1st gen with them in mind. In China, mining farms are nearly always built in very isolated places where there is very cheap wind or water electricity. When we built our 1st generation Cobo Vault hardware wallet, we needed to maximize the durability of the device in addition to its security. We used aerospace aluminum rather than plastic and made it completely IP68 waterproof, IK9 drop resistant, and military-standard MIL-STD-810G durable for the mining industry.

Things changed last year when I went to Bitcoin 2019 and talked to lots of hodlers in the States. I found that 95% of them don’t care about durability. I asked them if they were afraid of their home being flooded or burned down in a fire. The answer is: yes, they are afraid of these things, but they see them as very low possibilities. Even if something were to happen, they said they would just buy another HW wallet for 100 dollars. From these conversations, it became more and more clear that we should design a product around a normal hodler’s needs. Our 2nd gen product compromises on durability but doesn’t compromise on security. Most hodlers share some needs with miners:
Hodlers want a more air-gapped solution so we kept QR code data transmission between your hardware wallet and the companion app which is also auditable.
A Secure Element is the strongest wall of protection from physical attacks. We are the first hardware wallet - also maybe the first electronic product with SE - to have open source SE firmware.
A battery can be a significant weak point. The 2nd gen continues the legacy of detachable batteries to prevent corrosion damage and will also support AAA batteries in case your battery dies someday.
The 2nd gen also keeps the 4-inch touchscreen so you don’t need to suffer from tiny buttons and little screens anymore. Human error is one of the biggest reasons people lose their assets.
We kept other features like the self-destruct mechanism and Web Authentication, which prevent side-channel and supply chain attacks.
If you'd like to read more about these features, check out our blog posts. Aside from the legacy of the 1st gen, our 2nd gen product will have:
Open source hardware wallet application layer and Secure Element firmware code. With the open source firmware code, you can see: random number generation, master private key generation, key derivation, and the signing process all happen within the SE and your private keys never leave.
At the Bitcoin 2019 conference half the hodlers I met told me they own multiple hardware wallets which they use on the go. We added a fingerprint sensor you can use to authorize transactions without typing in your password. No need to worry about surveillance cameras when using your hardware wallet in airports.
We will also support PSBT (BIP174) to be compatible with third-party wallets like Electrum or Wasabi Wallet in case people have need of using Cobo Vault with their own node or coinjoin. Multisig between Cobo Vault and other wallets will be realized to prevent single point failure with any brand of hardware wallet.
By sacrificing durability, we kept the price under 100 USD for the basic version.
BTC-only firmware version for people who want to minimize the codebase for less of an attack surface.
We truly appreciate the support from the community and are giving away free metal storage Cobo Tablets with every purchase of our 2nd gen for a week! Add a tablet to your cart and place your order before May 5th, 8 AM PST to claim your free metal storage. Find us on Twitter @CryptoLixin and @CoboVault - any suggestions or questions are welcome!
We’ve been working on a new product release for a year and want to hear your opinions on the product. Read on for product information and our vision for hardware wallets.
TL;DR Key features of Cobo Vault 2nd gen we are going to launch:
QR code air-gapped
Totally open source: Including firmware of the Secure Element
PSBT support and compatibility with other wallets
Fingerprint authorization to prevent password leak
Detachable battery to prevent battery corrosion and AAA support
Bitcoin-only firmware option
Hey bitcoin! I'm Lixin, a longtime Bitcoiner and creator of Cobo Vault. I come from a background in the electronic hardware industry, and one of my products was featured in Apple Stores around the world. Although my interest goes back to 2010, my career intersected Bitcoin when Discus Fish (CEO of Cobo) invited me to help build Cobo’s hardware product line. Discus Fish is also the co-founder and CEO of f2pool, one of the largest mining pools in the world and one of the earliest advocates of bitcoin in China.

Back in 2018 we built our 1st generation Cobo Vault hardware wallet. As we had strong ties to miners in China, we naturally designed the 1st gen with them in mind. For those who are not familiar with the mining industry in China, mining farms are nearly always built in very isolated places where there is very cheap wind or water electricity. As the miners would take their storage into these isolated regions, we needed to maximize the durability of the device in addition to its security. We used aerospace aluminum rather than plastic and made it completely IP68 waterproof. We also gave it a hardshell metal case you can put it in, which is IK9 drop resistant and passes the American military durability test MIL-STD-810G.

As for the electronic components inside the device, in order to maximize security, we made it as air-gapped as possible with QR codes. We see this as an important choice because USB cables and Bluetooth are not transparent and have a bigger attack surface. With QR codes you can see exactly what is going on, and you do not have to connect to a laptop which could have malware on it. QR code interaction needs a camera and a more complicated system which has to be supported by higher-level chips. All this comes with a cost, and the 1st generation isn’t as accessible for average hodlers. For more details on the product, visit here.

Things changed last year when I went to Bitcoin 2019 and talked to lots of hodlers in the States.
I found that 95% of them don’t care about durability. I asked them if they were afraid of their home being flooded or burned down in a fire. The answer is: yes, they are afraid of these things, but they see them as very low possibilities. Even if something were to happen, they said they would just buy another HW wallet for 100 dollars. From these conversations, it became more and more clear that the needs of miners and hodlers are totally different. After coming back from that conference, our team began the almost one-year journey of designing our 2nd gen product. It compromises on durability but doesn’t compromise on security. We designed the 2nd gen product entirely around a normal hodler’s needs. Obviously hodlers share some common needs with miners:
Hodlers want a more air-gapped solution so we took the legacy of QR code data transmission between your hardware wallet and the companion app from the 1st gen rather than using USB or Bluetooth.
A Secure Element is the strongest wall of protection from physical attacks. The 2nd gen will also have a Secure Element.
Some hodlers may touch their hardware wallet once every several months. Over such a period, the battery could be a significant weak point. The 2nd gen continues the legacy of a detachable battery to prevent battery corrosion damage. In case the battery dies someday, the 2nd gen will also support AAA batteries.
The 2nd gen also keeps the 4-inch touch screen. A touchscreen significantly increases ease of use - you don’t need to suffer from tiny buttons and little screens anymore. It also significantly lowers the possibility for human error, which is one of the biggest reasons that people lose their assets.
We kept other features like the self-destruct mechanism, which prevents side-channel attacks, and Web Authentication, which prevents supply-chain attacks.
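The QR-based air gap described in these bullets boils down to encoding data as a sequence of scannable fragments that either side can verify by eye. As an illustration only (this is a hypothetical format, not Cobo Vault's actual wire protocol), a payload such as an unsigned transaction can be chunked for a QR sequence and reassembled like this:

```python
import base64
import json
import math

def chunk_for_qr(payload: bytes, chunk_size: int = 200) -> list[str]:
    """Split a binary payload (e.g. an unsigned transaction) into numbered
    text fragments small enough to render as a sequence of QR codes.
    Illustrative format: each fragment carries its index, the total count,
    and a slice of the base64-encoded payload."""
    data = base64.b64encode(payload).decode()
    total = math.ceil(len(data) / chunk_size)
    return [
        json.dumps({"i": i, "n": total, "d": data[i * chunk_size:(i + 1) * chunk_size]})
        for i in range(total)
    ]

def reassemble(fragments: list[str]) -> bytes:
    """Rebuild the payload once every fragment has been scanned."""
    parts = sorted((json.loads(f) for f in fragments), key=lambda p: p["i"])
    assert len(parts) == parts[0]["n"], "missing fragments"
    return base64.b64decode("".join(p["d"] for p in parts))
```

Because each fragment is indexed, scan order does not matter, which is what makes a camera-to-screen round trip practical in place of USB or Bluetooth.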
If you'd like to read more about these features, check out our blog posts here. Aside from these legacies from the 1st gen, our 2nd gen product will have some other big improvements:
With our 2nd gen we will open source the whole codebase when we launch in late April - including the firmware of the Secure Element. We are the first hardware wallet - and maybe the first electronic product with an SE - to have open sourced the firmware of the SE. With the open source firmware code, you can see that random number generation, master private key generation, key derivation, and the signing process all happen within the SE, and your private keys never leave it (maximizing protection against physical attacks).
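To make "master private key generation happens inside the SE" concrete: the first step of standard BIP32 derivation is a single HMAC-SHA512 call over the seed. A minimal sketch using only the Python standard library (on a device like this, the equivalent computation runs inside the Secure Element and the seed never leaves the chip):

```python
import hashlib
import hmac

def bip32_master_key(seed: bytes) -> tuple[bytes, bytes]:
    """BIP32 master key generation: HMAC-SHA512 keyed with the ASCII string
    'Bitcoin seed'. The left 32 bytes become the master private key and the
    right 32 bytes the chain code used for further derivation."""
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]

# Seed from BIP32 test vector 1
master_key, chain_code = bip32_master_key(
    bytes.fromhex("000102030405060708090a0b0c0d0e0f")
)
```

Open sourcing the SE firmware lets anyone check that this step, child-key derivation, and signing are the only things the chip does with the key material.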
At the Bitcoin 2019 conference half the hodlers I met told me they own multiple hardware wallets which they use on the go. So we added a fingerprint sensor you can use to authorize your transactions without typing in your password. No need to worry about surveillance cameras when using your hardware wallet in airports.
We will also support PSBT (BIP174) so that the device will be compatible with third-party wallets like Electrum or Wasabi Wallet, in case people need to use Cobo Vault with their own node or for coinjoin. Multisig between Cobo Vault and other wallets will also be supported, to prevent a single point of failure with any single brand of hardware wallet.
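What makes this cross-wallet compatibility possible is that PSBT (BIP174) is a fixed binary format: a magic header followed by key-value maps that any conforming wallet can read. A minimal structural check, for illustration only (a real implementation would go on to parse the global, input, and output maps):

```python
import base64

PSBT_MAGIC = b"psbt\xff"  # BIP174: ASCII "psbt" followed by a 0xff separator

def looks_like_psbt(b64: str) -> bool:
    """Sanity-check a base64-encoded PSBT: decode it and verify the
    BIP174 magic prefix. Returns False for invalid base64 or wrong magic."""
    try:
        raw = base64.b64decode(b64, validate=True)
    except ValueError:
        return False
    return raw.startswith(PSBT_MAGIC)
```

This is why every base64 PSBT you see starts with "cHNidP8": that prefix is just the magic bytes in base64.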
By sacrificing the device's durability (the aluminum body, protective case, and waterproof rating) and lowering production costs, we kept the price under 200 USD. We wanted it to be at a price point most people in the community would see value in.
Personally, I am a bitcoin maximalist and a big fan of the KISS principle. We will also release a BTC-only firmware version for people who want to minimize the codebase for a smaller attack surface. Thank you for reading this far. More details, like the final price, will be released when we officially launch the product in late April. Any suggestions or questions are welcome. You can also find me @CryptoLixin or @CoboVault on Twitter! Ears are wide open!
Transcript of Bitcoin ABC’s Amaury Sechet presenting at the Bitcoin Cash City conference on September 5th, 2019
I tried my best to be as accurate as possible, but if there are any errors, please let me know so I can fix them. I believe this talk is important for all Bitcoin Cash supporters, and I wanted to provide it in written form so people can read it as well as watch the video: https://www.youtube.com/watch?v=uOv0nmOe1_o For me, this was the first time I felt like I understood the issues Amaury's been trying to communicate, and I hope that reading this presentation might help others understand as well.

Bitcoin Cash’s Culture

“Okay. Hello? Can you hear me? The microphone is good, yeah? Ok, so after that introduction, I’m going to do the only thing that I can do now, which is disappoint you, because well, that was quite something. So usually I make technical talks and this time it’s going to be a bit different. I’m going to talk about culture in the Bitcoin Cash ecosystem. So first let’s talk about culture: what is it? It’s ‘the social behaviors and norms found in human society.’ So we as the Bitcoin Cash community, we are a human society, or at least we look like it. You’re all humans as far as I know, and we have social behaviors and norms, and those social behaviors and norms have a huge impact on the project. And the reason why I want to focus on that point very specifically is because we have better fundamentals and we have a better product and we are more useful than most other cryptos out there. And I think that’s a true statement, and I think this is a testimony to the success of BCH. But also, we are only just 3% of BTC’s value. So clearly there is something that we are not doing right, and clearly it’s not fundamentals, it’s not product, it’s not usefulness. It’s something else, and I think this can be found somewhat in our culture. So I have this quote here, from Naval Ravikant.
I don’t know if you guys know him but he’s a fairly well known speaker and thinker, and he said, “Never trust anyone who does not annoy you from time to time, because it means that they are only telling you what you want to hear.” And so today I am going to annoy you a bit, in addition to disappointing you, so yeah, it’s going to be very bad, but I feel like we kind of need to do it. So there are two points, mainly, where I think our culture is not doing the right thing. And those are gonna be infrastructure and game theory. And so I’m going to talk a little bit about infrastructure and game theory. Right, so, I think there are a few misconceptions by people that are not used to working in software infrastructure in general, but basically, it works like any other kind of infrastructure. So basically all kinds of infrastructure decay, and we are under the assumption that technology always gets better and better and better and never decays. But in fact, it actually decays all the time, and we have just a bunch of engineers working at many many companies that keep working at making it better and fighting that decay. I’m going to take a few examples, alright. Right now if you want to buy a cathode ray tube television or monitor for your computer (I’m not sure why you would want to do that because we have better stuff now), but if you want to buy that, it’s actually very difficult now. There are very few manufacturers that even know how to build them. We almost forgot as a human society how to build those stuff. Because, well, there was not as high of a demand for them as there was before, and therefore nobody really worked on maintaining the knowledge or the know-how, and the factories, none of that which are required to build those stuff, and therefore we don’t build them. And this is the same for vinyl discs, right? You can buy vinyl discs today if you want, but they’re actually more expensive than they used to be twenty years ago. We used to have space shuttles.
Both Russia and the US used to have space shuttles. Then only the US had space shuttles, and now nobody has space shuttles anymore. And there is an even better counterexample to that. It’s that the US, right now, is refining uranium for nuclear weapons. Like on a regular basis there are people working on that problem. Except that the US doesn’t need any new uranium to make nuclear weapons, because they are decommissioning the weapons that are too old and can reuse that uranium to build the new weapons that they are building. The demand for it is actually zero, and still there are people making it and they are just basically making it and storing it forever, and it’s never used. So why is the US spending money on that? Well you would say governments are usually pretty good at spending money on stuff that is not very useful, but in that case there is a very good reason. And the good reason is that they don’t want to forget how it’s done. Because maybe one day it’s going to be useful. And acquiring the whole knowledge of working with uranium and making enriched uranium, refining uranium, it’s not obvious. It’s a very complicated process. It involves very advanced engineering and physics, a lot of that, and keeping people working on that problem ensures that knowledge is kept through time. If you don’t do that, those people are going to retire and nobody will know how to do it. Right. So in addition to decaying infrastructure, from time to time we can have zero-days in software, meaning problems in the software that are not yet known but can be exploited live on the network. We can have denial of service attacks, we can have various failures on the network, or whatever else, so just like any other infrastructure we need people that essentially take care of the problem and fight the decay, constantly doing maintenance, and are also ready to intervene whenever there is some issue.
And that means that even if there is no new work to be done, you want to have a large enough group of people that are working on that everyday just making it all nice and shiny so that when something bad happens, you have people that understand how the system works. So even if for nothing else, you want a large enough set of people working on infrastructure for that to be possible. So we’re not quite there yet, and we’re very reliant on BTC. Because the software that we’re relying on to run the network is actually a fork of the BTC codebase. And this is not specific to Bitcoin Cash. This is also true for Litecoin, and Dash, and Zcash and whatever. There are many many cryptos that are just a fork of the Bitcoin codebase. And all those cryptos are actually reliant on BTC to do some maintenance work because they have smaller teams working on the infrastructure. And as a result any rational market cannot price those other currencies higher than BTC. It would just not make sense. If BTC were to disappear, or were to fail on the market, and this problem is not addressed, then all those other currencies are going to fail with it. Right? And you know that may not be what we want, but that’s kind of like where we are right now. So if we want to go to the next level, maybe become number one in that market, we need to fix that problem because it’s not going to happen without it. So I was mentioning the 3% number before, and it’s always very difficult to know what all the parameters are that go into that number, but one of them is that. For that alone, I’m sure that we are going to have a lower value than BTC for as long as we don’t fix that problem. Okay, how do we fix that problem? What are the elements we have that prevent us from fixing that problem? Well, first we need people with very specific skill sets.
And the people that have experience in those skill sets, there are not that many of them, because there are not that many places where you can work on systems involving hundreds of millions, if not billions, of users, that do millions of transactions per second, that have hundreds of gigabytes per second of throughput, this kind of stuff. There are just not that many companies in the world that operate at that scale. And as a result, the number of people that have the experience of working at that scale is also pretty much limited to the people coming out of those companies. So we need to make sure that we are able to attract those people. And we have another problem that I talked about with Justin Bons a bit yesterday, that we don’t want to leave all that to be fixed by a third party. It may seem nice, you know: okay, I have a big company making good money, I’m gonna pay people to work on the infrastructure for everybody. I’m gonna hire some old-time cypherpunk that became famous because he made a t-shirt about RSA, and I’m going to use that to promote my company and hire a bunch of developers and take care of the infrastructure for everybody. They’re all good people, very competent. And indeed they are very competent, but they don’t have your best interest in mind, they have their best interest in mind. And so they should, right? It’s not evil to have your own interest in mind, but you’ve got to remember that if you delegate that to others, they have their best interest in mind, they don’t have yours. So it’s very important that you have different actors that have different interests that get involved in that game of maintaining the infrastructure. So they can keep each other in check. And if you don’t quite understand the value proposition for you as a business who builds on top of BCH, the best way to explain it to whoever is doing the financials of your company is as an insurance policy.
The point of the insurance on the building where your company is, or on the servers, is so that if everything burns down, you can get money to get your business started again and don’t go under. Well this is the same thing. Your business relies on some infrastructure, and if this infrastructure ends up going down, disappearing, or being taken in a direction that doesn’t fit your business, your business is toast. And so you want to have an insurance policy there that ensures that the pieces that you’re relying on are going to be there for you when you need them. Alright let’s take an example. In this example, I purposefully did not put any names because I don’t want to blame people. I want to use this as an example of mistakes that were made. I want you to understand that many other people have made many similar mistakes in that space, and so if all you take from what I’m saying here is that those people are bad and you should blame them, this is like completely the wrong stuff. But I also think it’s useful to have a real life example. So on September 1st, at the beginning of the week, we had a wave of spam that was broadcast on the network. Someone made a bunch of transactions, and those were very visibly transactions that were not there to actually do transactions, they were there just to create a bunch of load on the network and try to disturb its good behavior. And it turned out that most miners were producing blocks from 2 to 8 megabytes, while typical market demand is below half a megabyte, typically, and everything above that was just spam, essentially. And if you ask any people that have experience in capacity planning, they are going to tell you that those limits are appropriate.
The reasons why, and the alternatives to raising those limits that you can use to mitigate those side effects, are a bit complicated and would require a talk of their own to go into, so I’m just going to use an argument from authority here: trust me, I know what I’m talking about here, and raising those limits is just not the solution. But some pool decided to increase that soft cap to 32 megs. And this has two main consequences that I want to dig into to explain why this is not the right solution. And the first one is that we have businesses that are building on BCH today. And those businesses are the ones that are providing value, they are the ones making our network valuable. Right? So we need to treat those people as first class citizens. We need to attract and value them as much as we can. And those people, they find themselves in a position where they can either dedicate their resources and their attention and their time to make their service better and more valuable for users, or maybe expand their service to more countries, to more markets, to whatever, they can do a lot of stuff, or they can spend their time and resources to make sure the system works not just when you have 10x the usual load, but also 100x the usual load. And this is something that is not providing value to them, this is something that is not providing value to us, and I would even argue that this is something that is providing negative value. Because if those people don’t improve their service, or build new services, or expand their service to new markets, what’s going to happen is that we’re not going to do 100x. 100x happens because people provide useful services and people start using them. And if we distract those people so that they need to do random stuff that has nothing to do with their business, then we’re never going to do 100x.
And so having a soft cap that is way way way above the usual market demand (32 megs is almost a hundred times the market demand for it) is actually a denial of service attack that you open up for anyone that is building on the chain. We were talking before, like yesterday we were asking about how do we attract developers, and one of the important things is what we choose to value over everything else. And when we make this kind of move, the signal that we send to the community, to the people working on that, is that the opinion of people yelling very loudly on social media is more valued than your work to make a useful service building on BCH. This is an extremely bad signal to send. So we don’t want to send those kinds of signals anymore. That’s the first order effect, but there’s a second order effect, and the second order effect is that to scale we need people with experience in capacity planning. And as it turns out big companies like Google, and Facebook, and Amazon pay good money, they pay several hundred thousand dollars a year to people to do that work of capacity planning. And they wouldn’t be doing that if they just had to listen to people yelling on social media to find the answer. Right? It’s much cheaper to do the simple option, except the simple option is not very good because this is a very complex engineering problem. And not everybody is a very competent engineer in that domain specifically. So put yourself in the shoes of some engineers who have skills in that particular area. They see that happening, and what do they see? The first thing that they see is that if they join that space, they’re going to have some level of competence, some level of skill, and it’s going to be ignored by the leaders in that space, and ignoring their skills is not the best way to value them, as it turns out. And so because of that, they are less likely to join. But there is something else that they’re going to see.
And that is that because they are ignored, some shit is going to happen, some stuff is going to break, some attacks are going to be made, and who is going to be called to deal with that? Well, it’s them. Right? So not only are they not going to be valued for their work; the fact that they are not valued is going to put them in a situation where they have to put out a bunch of fires that they would have known how to avoid in the first place. So that’s an extremely bad value proposition for them to come work for us. And if we’re going to be a world-scale currency, then we need to attract those kinds of people. And so we need to have a better value proposition and better signaling to send to them. Alright, so that’s the end of the infrastructure part. Now I want to talk about game theory a bit, and specifically, Schelling points. So what is a Schelling point? A Schelling point is something that we can agree on without explicitly talking to each other. And there are a bunch of Schelling points that exist already in the Bitcoin space. For instance we all follow the longest chain that follows certain rules, right? And we don’t need to talk to each other. If I open my wallet and I have some amount of money, and I go to any one of you here and you check your wallet and you have some amount of money, those two amounts agree. We never talk to each other to come to any kind of agreement about how much each of us has in terms of money. We just know. Why? Because we have a Schelling point. We have a way to decide that without really communicating. So that’s the longest chain, but also all the consensus rules we have are Schelling points. So for instance, we accept blocks up to a certain size, and we reject blocks that are bigger than that. We don’t constantly talk to each other like, ‘Oh by the way do you accept 2 mb blocks?’ ‘Yeah I do.’ ‘Do you accept 3 mb blocks?
And tomorrow will you do that?’ We’re not doing this as different actors in the space, constantly checking in with each other. We just know there is a block size that is a consensus rule that is agreed upon by almost everybody, and that’s a consensus rule. And all the other consensus rules are effectively Schelling points as well. And our role as a community is to create valuable Schelling points. Right? You want to have a set of rules that provides as much value as possible for the different actors in the ecosystem. Because this is how we win. And there are two parts to that. Even though sometimes we look at it as just one thing, there are actually two things. The first one is that we need to decide what is a valuable Schelling point. And I think we are pretty good at this. And this is why we have a lot of utility and we have very strong fundamental development. We are very good at choosing what is a good Schelling point. We are very bad at actually creating it and making it strong. So I’m going to talk about that. How do you create a new Schelling point? For instance, there was a block size, and we wanted a new block size. So we need to create a new Schelling point. How do you create a new Schelling point that is very strong? You need a commitment strategy. That’s what it boils down to. And the typical example that is used when discussing Schelling points is nuclear warfare. So think about that a bit. You have two countries that both have nuclear weapons. And one country sends a nuke at the other country. It destroys some city, whatever, it’s bad. When you look at it from a purely rational perspective, you will assume that people are very angry, and that they want to retaliate, right? But if you put that aside, there is actually no benefit to retaliating. It’s not going to rebuild the city, it’s not going to make them money, it’s not going to give them resources to rebuild it, it’s not going to make new friends. Usually not.
It’s just going to destroy some stuff on the other side, and that doesn’t change anything for us, because the other guys already did the damage to us. So if you want nuclear weapons to actually prevent war, like we’ve mostly seen happen in the past few decades under the mutually-assured-destruction theory, you need each of those countries to have a very credible commitment strategy, which is: if you nuke me, I will nuke you, and I’m committing to that decision no matter what. I don’t care if it’s good or bad for me; if you nuke me, I will nuke you. And if you can commit to that strongly enough that it’s credible to other people, most likely they are not going to nuke you in the first place, because they don’t want to be nuked. And it’s crucial to understand that the commitment strategy is actually the most important part of it. It’s not the nukes, it’s not any of that, it’s the commitment strategy. If you don’t have the right commitment strategy, you can have all the nukes that you want and they’re completely useless, because you are not deterring anyone from attacking you. There are many other examples, like private property. It’s something you’re usually willing to put effort into defending, and the effort is usually way higher than the value of the property itself. Because this is your house, this is your car, this is your whatever, and you’re pretty committed to it, and therefore you create a Schelling point around the fact that this is your house, this is your car, this is your whatever. People are willing to use violence to defend their property. Even if you don’t do it yourself, this is effectively what happens when you call the cops, right? The cops say: stop violating that property or we’re going to use violence against you. So people are willing to use a very disproportionate response, even in comparison to the value of the property.
And this is what creates the Schelling point that allows private property to exist. This is the commitment strategy. And so the longest chain is a very simple example. You have miners, and what miners do when they create a new block is essentially move from one Schelling point, where a bunch of people have some amount of money, to a new Schelling point, where some money has moved, and we need to agree on the new Schelling point. And what they do is commit a certain amount of resources to it via proof of work. And this is how they get us to pay attention to the new Schelling point. And so UASF is also a very good example of that, where people said: we activate segwit no matter what; if it doesn’t pan out, we just busted our whole chain and we are dead. Right? This is the ultimate commitment strategy, as far as computer stuff is involved. It’s not like they actually died or anything, but as far as you can go in the computer space, this is a very strong commitment strategy. So let me take an example that is fairly inconsequential, but that I think explains the point very well. The initial BCH ticker was BCC. I don’t know if people remember that. Personally I remember it; it was probably when we created it with Jonald and a few other people. I personally was for XBC, but I went with BCC, and most people wanted BCC, right? It doesn’t matter. But it turned out that Bitfinex had some Ponzi scheme already listed as BCC. It was Bitconnect, if you remember. Carlos Matos, you know, great guy, but Bitconnect was not exactly the best stuff ever; it was a Ponzi scheme. And so as a result Bitfinex decided to list Bitcoin Cash as BCH instead of BCC, and then the ball started rolling, and now everybody uses BCH instead of BCC. So it’s not all that bad. The consequences are not that bad. And I know that many of you are thinking right now: why is this guy bugging us about this? We don’t care if it’s BCC or BCH.
And if you’re thinking that, you are exactly proving my point. Because … there are people working for Bitcoin.com here, right? Yeah, so Bitcoin.com is launching an exchange, or just has; it’s either out right now or it’s going to be out very soon. Well, think about that. Make this thought experiment for yourself. Imagine that Bitcoin.com lists some Ponzi scheme as BTC, and then they decide to list Bitcoin as BTN. What do you think would be the reaction of the Bitcoin Core supporters? Would they say: you know what? We don’t want to be confused with some Ponzi scheme, so we’re going to change everything to BTN. No, they would torch down Roger Ver even more than they do now, they would torch down Bitcoin.com. They would insult anyone who suggested that it was a good idea to go there. They would say that the thing listed as BTC is a Ponzi scheme, that it’s garbage, and that if you even talk about it you are the scum of the earth. Right? They would be extremely committed to whatever they have. And I think this is a lesson that we need to learn from them. Even though it’s just a ticker, and not that important in itself, it’s that attitude, being committed to your stuff if you want to create a strong Schelling point, that allows them to have a strong Schelling point, and not having that attitude is what keeps ours weaker. Okay, so yesterday we had the talk by Justin Bons from Cyber Capital, and one of the first things he said in his talk is that his company has a very strong position in BCH. And that changed the whole tone of the talk. You’ve got to take him seriously, because his money is where his mouth is. You know that he is not coming on stage telling you random stuff off the top of his head, or trying to get you to do something that he doesn’t do himself. That doesn’t mean he’s right. Maybe he’s wrong, but if he’s wrong, he’s going bankrupt.
And just for that reason, maybe it’s worth listening to him a bit more than to some random person saying random stuff with no skin in the game. And it makes him more of a leader in the space. Okay, we have some perception in this space that we have a bunch of leaders, but many of them don’t have skin in the game. And it is very important that they do. So when there is some perceived weakness in BCH, if you act as an investor, you are going to diversify. If you act as a leader, you are going to fix that weakness. Right? And so, leaders, it’s not like you can come here and decide, well, I’m a leader now. Leaders are leaders because people follow them. It seems fairly obvious, but … and you are the people following the leaders, and I am as well. We decide to follow the opinion of some people more than the opinion of others. And those are the de facto leaders of our community. And we need to make sure that those leaders that we have, like Justin Bons, have a strong commitment to whatever they are leading us to, because otherwise you end up in this situation: https://preview.redd.it/r23dptfobcl31.jpg?width=500&format=pjpg&auto=webp&s=750fbd0f1dc0122d2791accc59f45a235a522444 Where you’ve got a leader, he’s getting you to go somewhere, he has some goal, he has some whatever. In this case he is not that happy with the British. But he’s like: give me freedom or give me death, and he’s going to fight the British, but at the same time he’s like: you know what? Maybe this shit isn’t gonna pan out, you gotta make sure you have your backup plan together, you have your stash of British pounds here. You know, many of us are going to die, but that’s a sacrifice I’m willing to make. That’s not the leader that you want. I’m going to go through two more examples and then we’re going to be done with it. So one of them is Segwit2x. Segwit2x came at a time when some people wanted to do UASF.
And UASF was essentially people running a modified version of their Bitcoin node that would activate segwit on August 1, no matter what. Right? No matter what miners do, no matter what other people do, it’s going to activate segwit. And either I’m going to be on the other fork, or I’m going to be alone and bust. Well, the alternative proposal was Segwit2x, where people would activate segwit and then increase the size of the block. And what happened was that one of the sides had a very strong commitment strategy, and the other side, instead of choosing a proportional commitment strategy, modified the activation of Segwit2x to be compatible with UASF. And in doing so they both validated the commitment strategy of the opposite side and weakened their own. So if you look at that, and you understand game theory a bit, you know what’s going to happen. The fight hasn’t even started and UASF has already won. And when I saw that happening, it was a very important development to me, because I have quite a lot of experience with game theory, so I understood what was happening, and this is what led me to commit 100% to BCH, which was BCC at the time. Because I knew Segwit2x was toast, even though it had not even started: they had very strong cards, but they were not playing them right, and if you don’t play your cards right, it doesn’t matter how strong your cards are. Okay, the second example is emergent consensus. And the reason I wanted to put those two examples here is because I think they are the two main reasons why BTC has small blocks while we have big blocks and are a minority chain. Those were the two biggest opportunities we had to get big blocks on BTC, and we blew both of them for the exact same reason. So emergent consensus is an interesting technology that lets you move to bigger blocks without splitting the network.
Essentially, if someone starts producing blocks that are bigger than … (video skips) … the network seems to be following the chain that has larger blocks, eventually they’re going to fall back on that chain, and that’s a very clever mechanism that allows you to make the consensus rules softer, in a way, right? When everybody has the same consensus rules, they still remain enforced, but if a majority of people want to move to a new point, they can do so and bring the others with them without creating a fork. That is a very good activation mechanism for changing the block size, for instance, and it can be used to activate other stuff. There is a problem, though. This mechanism isn’t able to set a new Schelling point by itself. It’s a way to activate a new Schelling point when you have one, but it provides no way to decide when, or to what value, or to where we are going. So this whole strategy lacks the commitment aspect. And because it lacks the commitment aspect, it was unable to activate properly. It was good, but it was not sufficient in itself. It needs to be combined with a commitment strategy. And on that one in particular, some researchers wrote a whole paper (https://eprint.iacr.org/2017/686.pdf) unpacking the game theory, and they essentially come to the conclusion that it’s not going to set a new size limit because it lacks the commitment aspect. They model all the mathematics of it; they give you all the numbers, the probabilities, and the different scenarios that are possible. It’s a very interesting paper. I’m explaining the game theory here from a hundred-mile-high perspective, but you can actually deep dive into it, and if you want to know the details, they are in there. People are doing that; this is an actual branch of mathematics. Alright, okay, so, conclusion. We must avoid weakening our commitment strategy.
And that means that we need to work in a way where first there is decentralization happening. Everybody has ideas, and we fight over them, we decide where we want to go, we put them on the roadmap, and once it’s on the roadmap, we need to commit to it. Because when people go, ‘Oh, this is decentralized,’ and we do random stuff after that, we actually end up with decentralization not in a cooperative manner, but in an atomized manner. You get all the atoms flying everywhere, we explode, we destroy ourselves. And we must require leaders to have skin in the game, so that we make sure we have good leaders. I have a little diagram to explain that. We need to have negotiations between different parties, and the negotiation can last for a long time and be tumultuous and everything, and that’s fine, that’s what decentralization looks like at that stage, and that’s great, and that makes the system strong. But then, once we have made a decision, we have got to commit to it to create a new Schelling point. Because if we don’t, the new Schelling point is very weak, and we get decentralization in the form of disintegration. And I think we have not been very good at balancing the two. Essentially, what I would like for us to do going forward is to encourage as much as possible decentralization of the first form, but to consider people who participate in the second form as hostile to BCH, because their behavior is damaging to whatever we are doing. And they are often going to tell you that we can’t do that because it’s permissionless and decentralized, and they are right, this is permissionless and decentralized, and they can do that. But we don’t have to take it seriously. We can show them the door. And not a single person can do that by themselves, but as a group, we can develop a culture where it’s the norm to do that. And we have to do that.
Hi All, I am a long-term Bitcoin enthusiast and a core developer of PascalCoin, an infinitely scalable and completely original cryptocurrency (https://www.pascalcoin.org). I am also the developer of BlockchainSQL.io, an SQL backend for Bitcoin. I have been involved in the Bitcoin community for a long time, and was a big supporter of hard-forking on Aug 1 2017 (https://redd.it/6i5qt1). Due to the recent alarming proposals, and the manner in which they are being pushed, I feel I have a moral duty to speak out and warn against what could be fatal technical errors for BCH. As a full-time core developer at PascalCoin for the last 18 months, I have dealt with DoS attacks, 51% attacks, timewarp attacks, mining-centralisation attacks, out-of-consensus bugs, high orphan rates and various other issues. Suffice to say, Layer-1 cryptocurrency development is hard, and you don't really appreciate how fragile everything is until you work on a cryptocurrency codebase and manage a live mainnet (disclaimer: Albert Molina is the main genius here, but it is a team effort). Infinite Block Size: I know there has been much discussion here about the safety of "big blocks", and I generally agree with those arguments. However, the analysis I've seen always assumes the attackers are economically rational actors. On that basis, yes, the laws of economics will incentivise miners to naturally regulate the size of minted blocks. However, this does not cover "economically irrational actors" such as competing coins, governments, banks, etc. Allowing the natural limit of 32MB was, I think, a sensible move, but changing the network protocol to allow 128MB blocks and then more does not seem appropriate right now, since:
- Blocks are nowhere near the limit right now in BCH, so there is plenty of time for security/technical/reliability analysis going forward.
- The BCH social contract has been established as "onchain", so the risk of a "Blockstream 1MB attack" arising again is far less than the risk of a serious network instability issue arising from some unknown attack exploiting 100MB blocks.
It makes much more sense to leave the block size at 32MB until blocks reach ~16MB, at which point the technical, security and reliability issues can be better understood and a more informed decision can be made by the BCH community. Re-Enabling Opcodes: It's important to remember that these opcodes were disabled by Satoshi Nakamoto himself early on in the project, due to ongoing bugs and instability arising out of the scripting engine (https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures). Later, as the scripts became standardized, the issue was forgotten/abandoned, since re-activating them would require a hard fork and Core developers were against HFs. Personally, I think it's a good idea to re-enable them, but only after:
Transaction malleability is fixed: transactions can be malleated in many ways, including through their scripts. What are the consequences of malleability for smart-contract scripts that pay out money based on complex rules? If an attacker can flip a few opcodes, suddenly someone who shouldn't be paid may get paid. I'm not aware of any such attack right now, but in my professional opinion such an attack would be possible, and I would not be convinced otherwise until a thorough security analysis was performed.
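The core of this concern can be sketched in a toy model (Python). The serialization below is illustrative placeholder bytes, not Bitcoin's real transaction format, but the mechanism is the same: two script encodings with identical meaning produce different txids, which is the essence of third-party malleability.

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    """Bitcoin-style txid: double SHA-256 of the full serialization.
    Any byte-level change to a script changes the txid, even when the
    script's *meaning* is unchanged."""
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

# Two semantically equivalent ways to push the same 4 bytes onto the stack:
direct_push = bytes([0x04]) + bytes.fromhex("deadbeef")        # <4> data
pushdata1   = bytes([0x4C, 0x04]) + bytes.fromhex("deadbeef")  # OP_PUSHDATA1 <4> data

# Illustrative stand-ins for the rest of a serialized transaction:
prefix, suffix = b"version|inputs|", b"|outputs|locktime"

tx_a = prefix + direct_push + suffix
tx_b = prefix + pushdata1 + suffix

# Same spending semantics, different txids: a third party can malleate the
# encoding in transit and invalidate anything that referenced tx_a's id.
assert txid(tx_a) != txid(tx_b)
```

Any downstream contract that commits to a specific txid (for instance, a pre-signed refund spending that output) breaks when the encoding is malleated, which is why the author argues malleability must be fixed before richer scripts are enabled.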
Testnet release: given the new large attack surface this introduces (remember, Satoshi disabled them himself for a reason), it makes sense to do a testnet deployment of this feature for at least 3-6 months. This is common practice in cryptocurrency development.
Infinite Script Size: One of the proposals I've seen that complements re-enabling opcodes is to enable unbounded script sizes. From local discussions I've had with people promoting this idea, the "belief" is that miners will auto-regulate these as well. However, this is unproven. Unbounded script sizes introduce significant attack vectors in the areas of denial of service and stack/memory overflow (especially with all opcodes enabled). One attack I can foresee here is the quadratic-hashing attack, but inside a single transaction! You have to understand that Ethereum had this problem from the outset, and this is why they introduced the concept of "gas". CPU power is a limited resource, and if you don't pay for it, it will be completely abused. From what I've seen, there is no equivalent to gas inside this proposal. To understand the seriousness of this issue, think back to Ethereum's network instability before the DAO hack. It went through many periods of DoS attacks as hackers cleverly found oversights in their opcode/EVM engine. This is a serious, proven, real-world attack vector, and not one to be "solved later". With unbounded script sizes that pay no gas, the BCH network could easily be brought to a grinding halt. Voting/Signaling/Testnet: Even at PascalCoin, we go through a process of voting to enable all changes (https://www.pascalcoin.org/voting). We are barely a $10M market-cap coin and yet show more discipline, with voting, well-defined PIP design guidelines and testnet releases. There is no excuse for BCH! It is a multi-billion-dollar network, and changes of this magnitude cannot be released so recklessly on such short time-frames. I hope these comments are considered by stakeholders of BCH and the community at large. I am not a maximalist and I support BCH, but the last week has revealed a serious technical void in BCH!
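The quadratic-hashing concern mentioned above can be made concrete with a rough cost model (Python; the sizes and function name are illustrative, not taken from any client). Under legacy (pre-BIP143-style) signature hashing, each input's signature check re-hashes roughly the whole serialized transaction, so total hashing work grows with the square of the input count: doubling the inputs quadruples the work.

```python
import hashlib

def legacy_sighash_work(num_inputs: int, bytes_per_input: int = 100) -> int:
    """Model the legacy signature-hashing cost of one transaction.

    For each input's signature check, the verifier re-hashes (roughly)
    the entire serialized transaction, so the total number of bytes
    hashed grows quadratically with the number of inputs.
    Returns the total bytes hashed.
    """
    tx_size = num_inputs * bytes_per_input  # whole-tx size grows with inputs
    total_hashed = 0
    for _ in range(num_inputs):
        # one whole-transaction hash per input signature
        hashlib.sha256(b"\x00" * tx_size).digest()
        total_hashed += tx_size
    return total_hashed

# Doubling the inputs quadruples the hashing work:
assert legacy_sighash_work(200) == 4 * legacy_sighash_work(100)
```

With no per-script limit and no gas-style fee on computation, an attacker can push this curve arbitrarily far inside a single transaction, which is the halt scenario the post warns about.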
The Bitcoin Core devs may not know much about economics, but they did know some things about security & reliability of cryptocurrency software. PLEASE REMEMBER THERE ARE EXTREMELY TALENTED AND VICIOUS ATTACKERS OUT THERE and you need to be very careful with changes of this magnitude.
So-called "Poison Blocks" (what Greg Maxwell called the "big block attack") are the way Bitcoin was designed to scale and the ONLY way it ever can
Sounds insane, right? Not if you realize Bitcoin works only because it is an economic system. Everything in Bitcoin that falls under the purview of cutthroat market competition works, and everything that doesn't, doesn't.
Security: miners compete ruthlessly on hashrate. This prevents 51% attacks. Security in Bitcoin is fully within the purview of cutthroat market competition, and the result is that it works and works excellently.
Networking: miners don't yet compete on networking to any great degree (Joannes Vermorel argues convincingly that the bandwidth and equipment requirements for even terabyte blocks are no great budgetary strain even for small miners). If they did, it would ensure they have the fat pipes needed for global scale, far in advance. The artificial blocksize cap is preventing networking from falling fully under the purview of cutthroat market competition, and therefore it doesn't fully work: we apparently (since some are balking at puny 128MB blocks) have laggard miners who have not upgraded to even mid-grade networking infrastructure or don't have the technical chops to do so. Removing the cap or raising it aggressively is the only way to incentivize miners to upgrade on an individual level (meaning, to avoid free riders; yes some proactive miners may upgrade early but it is a bad investment if the majority doesn't come along).
Node code: The apparent reliance on volunteer dev teams to supply node client code has effectively subsidized laggard miners in this area, keeping the node code from falling fully under the purview of cutthroat market competition, and as a result - surprise, surprise - the node code is insufficient and "lots of work is needed to get to 128MB."
The error here is that this is seen as a reason not to lift the cap. "We cannot raise the cap or miners would be forced to do work!" This is stated un-ironically, with no awareness that some miners being left behind and some miners making it is exactly how Bitcoin always had to work. This is a cry to leave node-code optimization out of the purview of cutthroat market competition, because apparently some believe that "cutthroat" has something to do with the result: the kind of socialist mindset that thinks cutthroat competition among seatbelt makers would lead to seatbelts that kill you. Anyone who understands economics knows nothing could be further from the truth. The rallying cry of the Core-style socialist mentality is that "node code is too important to be left to the market; we need good-Samaritan devs to provide it for all miners so that no miner is left behind." "The ultimate result of shielding men from the effects of folly is to fill the world with fools." -Herbert Spencer. Likewise, the ultimate result of shielding miners from their inability or unwillingness to suitably optimize their node software is to fill Bitcoin with unprofessional miners who can't take us to global adoption. Without the incentive to upgrade networking and codebase, Bitcoin lacks the crucial vetting process it needs in order to distill miners into a long tail of professionals who have what it takes to ride this train all the way to a billion users, quickly and securely. I challenge anyone to describe how they think Bitcoin can professionalize as long as there remains an effective subsidy for laggard miners in the areas of networking and node optimization (not meaning protocol optimization, but rather things like parallel validation). As painful as it may seem, the only way Bitcoin scales is over the bankrupt shells of many miners who didn't have what it takes. The cruft cannot come along for the ride. This means orphan battles, even if just a little at a time.
It means stress tests of rapidly increasing scale. While killing off too much hashpower too fast is in no one's interest (hashrate gets too low), moving at a speed that is fast yet manageable by most big-league pros is. And really, the changes that need to be made aren't even reputed by anyone to be incredibly hard problems once you accept, as Satoshi did, that "it ends in datacentres and big server farms." The fact that people are still arguing against 128MB by referencing tests with laptop nodes suggests that's the real problem here. Core's full-node religion still has sway, despite being manufactured from whole cloth. Also known as Blockstream Syndrome, as a play on Stockholm Syndrome (where captives begin to sympathize with their captors). Whatever the reasons given, critics of removing the cap invariably appeal to the infrastructure "not being ready" as if that were a bad thing. It's a good thing! First of all, if we were to wait for all miners to be ready, we would be waiting far too long. The right approach, to be determined by the market, is to move ahead somewhere between when 51% are ready and when, say, 90% are ready, which is exactly what we can expect to happen without a cap. The incentives are such that it is profitable to shear away some laggard miners but not too many (as culling too many at a time leaves BCH open to hashpower attack by BTC miners; over the longer term, though, it incentivizes pros to enter and take the place of the failed miners, making BCH even more secure). Secondly, the idea of a monolithic "infrastructure" ignores the secret sauce that makes Bitcoin work: miners in competition. Some are expected to fail to be ready! If not, how can Bitcoin miners get any more professional? Only the removal or reformation of the laggards can ever ensure Bitcoin ends up with professional infrastructure.
This vetting process is inevitable and essential, and it must apply to all aspects of Bitcoin that we want to see professionalized, including node software. Now, leaving aside a miner filling his block with his own 0-fee transactions (which can be dealt with by other miners rejecting blocks with too many 0-fee txs of low coin age*), Greg Maxwell's "big block attack", where big miners try to terrorize smaller (less well-capitalized) miners using oversized blocks that a sizable minority of the network can't handle due to their slow networking, is in fact exactly how Bitcoin MUST scale. It's not an attack, it's a stress test, and one Bitcoin literally cannot scale without. What he called an attack is the solution to scaling, not any kind of problem. Stress tests are incentivized in Bitcoin as a way of calling the bluff of the lazy miners. You gamble some money on an "attack," see who the slowpokes are, and take their block rewards for your own. No miners have had the balls to do this so far, but they will soon, or Bitcoin dies from the halvings in a few more years, as fee volume won't sustain security. As big blockers said to Core, there is no room for arbitrary "conservatism" in the face of an oncoming train. Finally, I leave you with a thought experiment. Imagine the community of volunteer developers in Bitcoin was so incredibly generous that it offered all miners ASIC designs, mining-pool software, and all manner of hashing optimizations, to the point that miners merely had to buy ASICs and plug them in with no need to understand anything at all, and no need to try innovating on ASIC design themselves since these incredibly skilled volunteers trumped everything they could possibly come up with. Now naturally this situation must eventually come to an end, as the real pros step in, like Samsung.
With security thereby left out of the purview of cutthroat market competition, thanks to overweening volunteerism that continued for too long (no problem with volunteers at the start, just as a child isn't born into the world an adult and needs parenting at first), these miners would be wholly unvetted, unprepared, unable to scale up their hashing operations, and would be obliterated by Samsung or maybe a government 51% attack to kill Bitcoin. The point here is that there is a formative period, and then there is adulthood. Growing up is a process of relying less and less on handouts, being exposed more and more to the cutthroat realities of the world. When is Bitcoin going to grow up? The halvings place a time limit on Bitcoin's security, and overprotective parents (those who don't want to remove the cap), in an ostensible effort to be conservative, may end up keeping Honeybadger holed up in his figurative mom's basement too long for him to accomplish his mission. *And if your response is, "This doesn't exist yet in any clients," I think you have missed the point of this post: again, that's a good thing. Let miners who are too incompetent to figure out something that simple get sloughed away. Do we really want such sluggards? If so, and you're a dev, volunteer some code to them. If not, try to get hired by them instead; I think the pay will be much better. And if your response is, "But that means some miners might get orphaned unexpectedly and cry foul," then once again I say: that's a good thing. Block creation is fundamentally a speculative process. In other words, it's a gamble, by design. It's a Keynesian beauty contest wherein each miner tries to mine the greediest block they can get away with while not upping their orphan risk appreciably. Messing around with low-coin-age 0-fee tx stuffing might get you orphaned; boo-hoo. Miners are under no obligation to tell other miners their standards for block beauty in advance, even though they typically have done so thus far.
Miners are ALWAYS free to orphan a block for ANY reason. That they generally keep to consistent, well-broadcast rules is a courtesy, not a necessity. Preventing general assholery isn't necessarily best effected by being up-front about what you will punish, but even if it is, miners can do that, too (let them figure it out, as they do for hashpower, unless you have a good argument for why there is no possible solution, or why the solution is necessarily too hard for a professional organization to figure out in reasonable time; that's the bar for objection, not "well, the volunteer dev code doesn't do this yet"). And if your response is, "That will increase the orphan rate": yes, and orphans already happen routinely, so it is certainly not any catastrophe. See it as a detox process. It might put some small strain on the network as the slowpokes and dickheads are smacked, but again, miners still choose this level of orphaning by the same Keynesian-beauty-contest dynamic. Orphans are a key part of why Bitcoin works and why it can scale, but if the orphan rate would interfere with service too much (unlikely, if you believe 0-conf works), that also gets taken into account in the beauty contest and gets balanced against the benefits of punishing bad behavior and the costs of stomaching the poison block. The offending miner can also be un-whitelisted, returned to rando-node status, but again, why are we trying to coddle miners by coming up with their strategies for being better professionals for them? Hopefully it is clear by now that all such arguments are central planning, which is bad, at least after an early parental phase which I think has long since passed its natural life.
https://codevalley.com/whitepaper.pdf This document treats Emergent coding from a philosophical perspective. It has a good introduction, description of the tech and is followed by two sections on justifications from the perspective of Fred Brooks No Silver Bullet criteria and an industrialization criteria.
Mark Fabbro's presentation from the Bitcoin Cash City Conference which outlines the motivation, basic mechanics, and usage of Bitcoin Cash in reproducing the industrial revolution in the software industry.
Building the Bitcoin Cash City presentation highlighting how the emergent coding group of companies fit into the adoption roadmap of North Queensland.
Forging Chain Metal, by Paul Chandler, CEO of Aptissio, one of the startups in the emergent coding space, which secured a million dollars in seed funding last year.
Bitcoin Cash App Exploration: a series of apps that are among the first to be built with emergent coding, presented at the conference and, in the case of the Cashbar, demonstrated live.
How does Emergent Coding prevent developer capture? A developer's Agent does not know what project they are contributing to and is thus paid for the specific contribution. The developer is controlling the terms of the payment rather than the alternative, an employer with an employment agreement. Why does Emergent Coding use Bitcoin BCH?
Both emergent coding and Bitcoin BCH are decentralized: emergent coding is a decentralized development environment consisting of Agents providing their respective design services, and each contract received by an Agent requires a BCH payment. As Agents are hosted by their developer owners, who may reside in any of 150 countries, Bitcoin Cash, a peer-to-peer electronic cash system, is ideal for including a developer regardless of geographic location.
Emergent coding will increase the value of the Bitcoin BCH blockchain: With EC, there are typically many contracts to build an application (Cashbar was designed with 10000 contracts or so). EC adoption will increase the value of the Bitcoin BCH blockchain in line with this influx of quality economic activity.
Emergent coding is being applied to BCH software first: One of the first market verticals being addressed with emergent coding is Bitcoin Cash infrastructure. We are already seeing quality applications created using emergent coding (such as the HULA, Cashbar, PH2, vending machines, ATMs, etc.). More apps and tools supporting Bitcoin Cash will attract more merchants and businesses to BCH.
Emergent coding increases productivity: Emergent coding increases developer productivity and reduces duplication compared to other software development methods. Emergent coding can provide BCH devs with an advantage over other coins. A BCH dev productivity advantage will accelerate Bitcoin BCH becoming the first global currency.
Emergent coding produces higher quality binaries: Higher quality software leads to a more reliable network.
1. Who/what is Code Valley? Aptissio? BCH Tech Park? Mining and Server Complex?
Code Valley Corp Pty Ltd is the company founded to commercialize emergent coding technology. Code Valley is incorporated in North Queensland, Australia. See https://codevalley.com
Aptissio Australia Pty Ltd is a company founded in North Queensland and an early adopter of emergent coding. Aptissio is applying EC to Bitcoin BCH software. See https://www.aptissio.com
Townsville Technology Precincts Pty Ltd (TTP) was founded to bring together partners to answer the tender for the Historic North Rail Yard Redevelopment in Townsville, North Queensland. The partners consist of P+I, Conrad Gargett, HF Consulting, and a self-managed superannuation fund (SMSF), with Code Valley Corp Pty Ltd expected to be signed as an anchor tenant. TTP answered a Townsville City Council (TCC) tender with a proposal for a AUD$53m project (stage 1) to turn the yards into a technology park, and subsequently won the tender. The plan calls for the bulk of the money to be raised in the Australian equity markets, with the city contributing 28% for remediation of the site and just under 10% coming from the SMSF. Construction is scheduled to begin in mid 2020 and be completed two years later.
Townsville Mining Pty Ltd was set up to develop a Server Complex in the Kennedy Energy Park in North Queensland. The site has undergone several studies as part of a due diligence process, with encouraging results for its competitiveness in terms of real estate, power, cooling and data.
TM are presently in negotiations with the owners of the site and is presently operating under an NDA.
The business model calls for leasing "sectors" to mining companies that wish to mine, allowing companies to control their own direction.
Since Emergent Coding uses the BCH rail, TM is seeking to contribute to BCH security with an element of domestic mining.
TM are working with American partners to lease one of the sectors to meet that domestic objective.
The site will also host Emergent Coding Agents and Code Valley and its development partners are expected to lease several of these sectors.
TM hopes to have the site operational within 2 years.
2. What programming language are the "software agents" written in?
Agents are "built" using emergent coding. You select the features you want your Agent to have and send out the contracts. In a few minutes you are in possession of a binary ELF. You run up your ELF on your own machine and it will peer with the emergent coding and Bitcoin Cash networks. Congratulations, your Agent is now ready to accept its first contract.
3. Who controls these "agents" in a software project?
You control your own Agents. It is a decentralized development system.
4. What is the software license of these agents? Full EULA here, now.
A license gives you the right to create your own Agents and participate in the decentralized development system. We will publish the EULA when we release the product.
5. What kind of software architecture do these agents have? Daemons responding to API calls? Background daemons that make remote connections to listening applications?
Your Agent is a server that requires you to open a couple of ports so as to peer with both the EC and BCH networks. If you run a BCH full node you will be familiar with this process. Your Agent will create a "job" for each contract it receives and is designed to operate thousands of jobs simultaneously in various stages of completion. It is your responsibility to manage your Agent and keep it open for business, or risk losing market share to another developer capable of designing the same feature in a more reliable manner (or at better cost, less resource usage, faster design time, etc.). There is competition at every classification, which is one reason emergent coding is on a fast path for improvement. It is worth reiterating here that Agents are only used in the software design process and do not perform any role in the returned project binary.
6. What is the communication protocol these agents use?
The protocol is proprietary and is part of your license.
7. Are the agents patented? Who can use these agents?
It is up to you if you want to patent your Agent. The underlying innovation behind emergent coding is _feasible_ developer specialization. Emergent coding gives you the ability to contribute to a project without revealing your intellectual property, thus creating prospects for repeat business; it renders software patents moot. Who uses your Agents? Your Agents earn you BCH with each design contribution made. It would be wise to have your Agent open for business at all times and to encourage everyone to use your design service.
8. Do I need to cooperate with the Code Valley company all of the time in order to deploy Emergent Coding on my software projects, or can I do it myself, using documentation?
It is a decentralized system. There is no single point of failure. Code Valley intends to defend the emergent coding ecosystem from abuse and bad actors, but that role is not on your critical path.
9. Let's say Electron Cash is an Emergent Coding project. I have found a critical bug in the binary. How do I report this bug, what does Jonald Fyookball need to do, assuming the buggy component is a "shared component" pulled from EC "repositories"?
If you built Electron Cash with emergent coding it will have been created by combining several high-level wallet features designed into your project by their respective Agents. Behind the scenes there are many more contracts that these Agents will let, and so on. For example, the Cashbar combines just 16 high-level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto. Should one of these 10,000 make a design error, Jonald only sees the high-level Agents he contracted. He can easily pinpoint which of these contractors is in breach. Similarly, this contractor can easily pinpoint which of its sub-contractors is in breach, and so on. The offender that breached their contract, wherever in the project they made their contribution, is easily identified.
For example, when my truck has a warranty problem, I do not contact the supplier of the faulty big-end bearing; I simply take it back to Mazda, who in turn will locate the fault. Finally, "...assuming the buggy component is a 'shared component' pulled from EC 'repositories'?" - there are no repositories or "shared components" in emergent coding.
10. What is your licensing/pricing model? Per project? Per developer? Per machine?
Your Agent charges for each design contribution it makes (i.e. per contract). The exact fee is up to you. The resulting software produced by EC is unencumbered. Code Valley's pricing model consists of a seat license, but while we are still determining the exact policy, we feel the "Valley" (where Agents advertise their wares) should charge a small fee to help prevent gaming the catalogue, and a transaction fee to provide an income in proportion to operations.
11. What is the basic set of applications I need in order to deploy full Emergent Coding in my software project? What is the function of each application? Daemons, clients, APIs, frontends, GUIs, operating systems, databases, NoSQLs? A lot of details, please.
There's just one. You buy a license and are issued with our product called Pilot. You run Pilot (a node) up on your machine and it will peer with the EC and BCH networks. You connect your browser to Pilot, typically via localhost, and you're in business. You can build software (including special kinds of software like Agents) by simply combining available features. Pilot allows you to specify the desired features and will manage the contracts and the decentralized build process. It also gives you access to the "Valley", a decentralized advertising site that contains the "business cards" of each Agent in the community, classified into categories for easy search. If we are to make a step change in software design, inventing yet another HLL will not cut it. As Fred Brooks puts it, an essential change is needed.
12.
How can I trust a binary when I cannot see the source?
The Emergent Coding development model is very different to what you are used to. There are ways of arriving at a binary without source code. The Agents in emergent coding design their feature into your project without writing code. We can see the features we select, but cannot show you source, as the design process doesn't use an HLL. The trust model is also different. The bulk of the testing happens _before_ the project is designed, not _after_. Emergent Coding produces a binary with very high integrity, and arguably far more testing is done in emergent coding than in the incumbent methods you are used to. In emergent coding, your reputation is built upon the performance of your Agent. If your Agent produces substandard features, you are simply creating an opportunity for a competitor to increase their market share at your expense. Here are some points worth noting regarding bad-actor Agents:
An Agent is a specialist and in emergent coding is unaware of the project they are contributing to. If you are a bad actor, do you compromise every contract you receive? Some? None?
Your client is relying on the quality of your contribution to maintain their own reputation. Long before any client will trust your contributions, they will have tested you to ensure the quality is at their required level. You have to be at the top of your game in your classification to even win business. This isn't some shmuck pulling your routine from a library.
Each contract to your Agent is provisioned, i.e. you advertise in advance what collaborations you require to complete your design. There is no opportunity for a "sign a Bitcoin transaction" Agent to be requesting "send an HTTP request" collaborations.
Your Agent never gets to modify code, it makes a design contribution rather than a code contribution. There is no opportunity to inject anything as the mechanism that causes the code to emerge is a higher order complexity of all Agent involvement.
There is near perfect accountability in emergent coding. You are being contracted and paid to do the design. Every project you compromise has an arrow pointed straight at you should it be detected even years later.
Security is a whole other ball game in emergent coding and current rules do not necessarily apply.
13. Every time someone rebuilds their application, do they have to pay over again for all "design contributions"? (Or is the ability to license components at a fixed single price, for at least a limited period or even perpetually, supported by the construction (agent) process?)
You are paying for the design. Every time you build (or rebuild) an application, you pay the developers involved. They do not know they are "rebuilding". This sounds dire, but it costs far less than you think and there are many advantages. Automation is very high with emergent coding, so software design is completed for a fraction of the cost of incumbent design methods. You could perhaps rebuild many times before matching incumbent methods. Adding features is hard with incumbent methods: "..very few late-stage additions are required before the code base transforms from the familiar to a veritable monster of missed schedules, blown budgets and flawed products" (Brooks Jr 1987), whereas with emergent coding, adding a late-stage feature requires a rebuild and hence integrates seamlessly. With Emergent Coding, you can add an unlimited number of features without risking the codebase, as there isn't one. The second part of your question incorrectly assumes software is created from licensed components, rather than created by paying Agents to design features into your project without any licenses involved.
14. In this construction process, is the vendor of a particular "design contribution" able to charge differential rates of their own choosing? E.g. if I wanted to charge a super-low rate to someone from a third-world country versus charging slightly more when a global multinational corporation wants to license my feature?
Yes. Developers set the price and policy of their Agent's service. The Valley (where your Agent is presently advertised) presently only supports a simple price policy.
The second part of your question incorrectly assumes features are encumbered with licenses. A developer can provide their feature without revealing their intellectual property. A client has the right to reuse a developer's feature in another project but will find it uneconomical to do so.
15. Is "entirely free" a supported option during the contract negotiation for a feature?
Yes. You set the price of your Agent.
16. "There is no single point of failure." Right now, it seems one needs to register, license the construction tech, etc. Is that going to change to a model where your company is not necessarily in that loop? If not, don't you think that's a single point of failure?
It is a decentralized development system. Once you have registered you become part of a peer-to-peer system. Code Valley has thought long and hard about its role and has chosen the reddit model. It will set some rules for your participation and will detect or remove bad actors. If, in your view, Code Valley becomes a bad actor, you have control over your Agent, private keys and IP, and you can leave the system at any time.
17. What if I can't obtain a license because of some or other jurisdictional problem? Are you allowed to license the technology anywhere in the world, or just where your government allows it?
We are planning to operate in all 150 countries. As EC is peer-to-peer, Code Valley does not need to register as a digital currency exchange or the like. Only those countries banning BCH will miss out (until such time as BCH becomes the first global electronic cash system).
18.
"For example the Cashbar combines just 16 high level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto."
It seems already a reasonably complex application, so well done in having that as a demo.
Thank you.
19. I asked someone else a question about how it would be possible to verify whether an application (let's say one received a binary executable) has been built with your system of emergent coding. Is this possible?
Yes, of course. If you used EC to build an application, you can sign it and claim anything you like. Your client knows it came from you because of your signature. The design contributions making up the application are not signed, but surprisingly there is still perfect accountability (see below).
20. I know it is possible to identify, for example, all source files and other metadata (like build environment) that went into constructing a binary, by storing this data inside an executable.
All emergent coding metadata is now stored offline. When your Agent completes a job, you have a log of the design agreements you made with your peers, etc. If you are challenged at a later date for breaching a design contract, you can pull your logs to see what decisions you made, what sub-contracts were let, and so on. As every Agent has their own logs, the community as a whole has a completely trustless log of each project undertaken.
21. Is this being done with EC build products, and would it allow the recipient to validate that what they've been provided has been built only using "design contributions" cryptographically signed by their providers and nothing else (i.e. no code that somehow crept in that isn't covered by the contracting process)?
The emergent coding trust model is very effective and has been proven in other industries. Remember, your Agent creates a feature in my project by actually combining smaller features contracted from other Agents; thus your reputation is linked to that of your suppliers. If Bosch makes a faulty relay in my Ford, I blame Ford for a faulty car, not Bosch, when my headlights don't work.
Similarly, you must choose and vet your sub-contractors to the level of quality that you yourself want to project. Once these relationships are set up, it becomes virtually impossible for a bad actor to participate in the system for long, or even from the get-go.
22. A look at code generated, and a surprising answer to "why is every intermediate variable spilled?" Thanks to u/R_Sholes, this snippet from the actual code for:
number = number * 10 + digit
generated as a part of:
sub read/integeboolean($, 0, 100) -> guess
; copy global to local temp variable
0x004032f2 movabs r15, global.current_digit
0x004032fc mov r15, qword [r15]
0x004032ff mov rax, qword [r15]
0x00403302 movabs rdi, local.digit
0x0040330c mov qword [rdi], rax
; copy global to local temp variable
0x0040330f movabs r15, global.guess
0x00403319 mov r15, qword [r15]
0x0040331c mov rax, qword [r15]
0x0040331f movabs rdi, local.num
0x00403329 mov qword [rdi], rax
; multiply local variable by constant, uses new temp variable for output
0x0040332c movabs r15, local.num
0x00403336 mov rax, qword [r15]
0x00403339 movabs rbx, 10
0x00403343 mul rbx
0x00403346 movabs rdi, local.num_times_10
0x00403350 mov qword [rdi], rax
; add local variables, uses yet another new temp variable for output
0x00403353 movabs r15, local.num_times_10
0x0040335d mov rax, qword [r15]
0x00403360 movabs r15, local.digit
0x0040336a mov rbx, qword [r15]
0x0040336d add rax, rbx
0x00403370 movabs rdi, local.num_times_10_plus_digit
0x0040337a mov qword [rdi], rax
; copy local temp variable back to global
0x0040337d movabs r15, local.num_times_10_plus_digit
0x00403387 mov rax, qword [r15]
0x0040338a movabs r15, global.guess
0x00403394 mov rdi, qword [r15]
0x00403397 mov qword [rdi], rax

For comparison, an equivalent snippet in C compiled by clang without optimizations gives this output:

imul rax, qword ptr [guess], 10
add rax, qword ptr [digit]
mov qword ptr [guess], rax
Collaborations at the byte layer of Agents result in designs that spill every intermediate variable. Firstly, why is this so? Agents from this early version only support one catch-all variable design when collaborating. This is similar to a compiler when all registers contain variables: the compiler must spill a register temporarily to main memory. The compiler would still work if it spilled every variable to main memory, but it would produce code that would be, as above, hopelessly inefficient. However, by only supporting the catch-all portion of the protocol, the Code Valley designers were able to design, build and deploy these Agents faster, because an Agent needs fewer predicates in order to participate in these simpler collaborations. The protocol, however, can have many "policies" besides the catch-all default policy (Agents can collaborate over variables designed to be on the stack, or, as is common for intermediate variables, designed to use a CPU register, and so forth). This example highlights one of the very exciting aspects of emergent coding: if we now add a handful of additional predicates to a handful of these byte-layer Agents, henceforth ALL project binaries will be 10x smaller and 10x faster. Finally, there can be many Agents competing for market share at each classification. If these "gumby" Agents do not improve, you can create a "smarter" competitor (i.e. with more predicates) and win business away from them. Candy from a baby. Competition means the smartest Agents bubble to the top of every classification and puts the entire emergent coding platform on a fast path for improvement. Contrast this with incumbent libraries, which do not have a financial incentive to improve. Just wait until you get to see our production system.
23. How hard can an ADD Agent be?
Typically an Agent's feature is created by combining smaller features from other Agents.
The smallest features are so devoid of context and complexity that they can be rendered by designing a handful of bytes in the project binary. Below is a description of one of these "byte layer" Agents to give you an idea how they work. An "Addition" Agent creates the feature of "adding two numbers" in your project (this is an actual Agent). That is, it contributes to the project design a feature such that when the project binary is delivered, there will be an addition instruction somewhere in it that was designed by the contract that was let to this Agent. If you were this Agent, for each contract you received, you would need to collaborate with peers in the project to resolve vital requirements before you could proceed to design your binary "instruction". Each paid contract your Agent receives will need to participate in at least 4 collaborations within the design project. These are:
Input A collaboration
Input B collaboration
Output collaboration
Construction site collaboration
You can see from the collaborations involved how your Agent can determine the precise details needed to design its instruction. As part of the contract, the Addition Agent will be provisioned with contact details so it can join these collaborations. Your Agent must collaborate with the other stakeholders in each collaboration to resolve that requirement - in this case, how a variable will be treated. The stakeholders use a protocol to arrive at an agreement and share the terms of the agreement. For example, the stakeholders of collaboration "Input A" may agree to treat the variable as a signed 64-bit integer and resolve to locate it at location 0x4fff2, or alternatively agree that the RBX register should be used, or agree to use one of the many other ways a variable can be represented. Once each collaboration has reached an agreement and the terms of that agreement have been distributed, your Agent can begin to design the binary instruction. The construction site collaboration is where you will exactly place your binary bytes. The construction site protocol is detailed in the whitepaper and is some of the magic that allows the decentralized development system to deliver the project binary. The protocol consists of 3 steps:
You request space in the project binary be reserved.
You are notified of the physical address of your requested space.
You deliver the binary bytes you designed to fill the reserved space.
Once the bytes are returned your Agent can remove the job from its work schedule. Job done, payment received, another happy customer with a shiny ADD instruction designed into their project binary. Note:
Observe how it is impossible for this ADD Agent to install a backdoor undetected by the client.
Observe how the Agent isn’t linking a module, or using a HLL to express the binary instruction.
Observe how with just a handful of predicates you have a working "Addition" Agent capable of designing the Addition Feature into a project with a wide range of collaboration agreements.
Observe how this Agent could conceivably not even design-in an ADD instruction if one of the design time collaboration agreements was a literal "1" (It would design in an increment instruction). There is even a case where this Agent may not deliver any binary to build its feature into your project!
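The three-step construction-site exchange described above can be sketched as a toy model. Everything below (class and method names, the example bytes) is a hypothetical illustration of the steps as described; the real EC protocol is proprietary.

```python
# Toy model of the 3-step construction-site protocol described above.
# All names and byte values are illustrative; the real EC protocol is proprietary.

class ConstructionSite:
    """Reserves space in the project binary and collects delivered fragments."""

    def __init__(self, base_address):
        self.next_free = base_address
        self.fragments = {}  # physical address -> delivered bytes

    def reserve(self, size):
        """Steps 1-2: reserve space and notify the agent of its physical address."""
        address = self.next_free
        self.next_free += size
        return address

    def deliver(self, address, payload):
        """Step 3: the agent delivers the bytes designed to fill its space."""
        self.fragments[address] = payload

    def binary(self):
        """Concatenate delivered fragments in address order."""
        return b"".join(self.fragments[a] for a in sorted(self.fragments))


# A hypothetical "Addition" Agent whose collaborations agreed that both
# inputs already live in registers, so it designs a single ADD instruction.
site = ConstructionSite(base_address=0x403000)
addr = site.reserve(size=3)              # steps 1 and 2
site.deliver(addr, b"\x48\x01\xd8")      # step 3: x86-64 `add rax, rbx`
assert site.binary() == b"\x48\x01\xd8"
```

In this sketch the site allocates addresses linearly; the actual protocol would reserve space through collaboration rather than a central allocator.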
24. How does EC arrive at a project binary without writing source code?
Devs using EC combine features to create solutions. They don't write code. EC devs contract Agents, which design the desired features into their project for a fee. Emergent coding uses a domain-specific contracting language (called Pilot) to describe the necessary contracts. Pilot is not a general-purpose language. As Agents create their features by similarly combining smaller features contracted from peers, your desired features may ultimately result in thousands of contracts. As it is Agents all the way down, there is no source code behind the project binary.
Traditional: software requirements -> write code -> compile -> project binary (ELF).
Emergent coding: select desired features -> contract agents -> project binary (ELF).
Agents themselves are created the same way: specify the features you want your Agent to have, contract the necessary Agents for those features and, voila, an Agent project binary (ELF).
25. How is the actual binary code that Agents deliver to each other written?
An Agent never touches code. With emergent coding, Agents contribute features to a project and leave the project binary to emerge as the higher-order complexity of their collective effort. Typically, Agents "contribute" their feature by causing smaller features to be contributed by peers, who in turn do likewise. By mapping features to smaller features delivered by these peers, Agents ensure their feature is delivered to the project without themselves making a direct code contribution. Peer connections established by these mappings serve both to incrementally extend a temporary project "scaffold" and to defer the need to render a feature as a code contribution.
At the periphery of the scaffold, features are so simple they can be rendered as binary fragments. These fragments use the information embodied by the scaffold to guide their concatenation back along the scaffold, emerging as the project binary - hence the term Emergent Coding. Note that the scaffold forms a temporary tree-like structure, which allows virtually all of the project's design contracts to be completed in parallel. The scaffold also automatically limits an Agent's scope to precisely the resources and site for its feature. This is why it is virtually impossible for an Agent to install a "back door" or other malicious code into the project binary.
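A minimal sketch of that scaffold idea, assuming a simple tree of agents: inner agents contribute no bytes directly, only the peripheral "byte layer" agents render fragments, and the fragments concatenate back along the tree into the binary. All agent names and byte values here are hypothetical.

```python
# Toy sketch of the temporary "scaffold": a tree of agents in which only
# the leaves render binary fragments; inner agents merely map their feature
# onto sub-contracted features. Names and bytes are illustrative only.

class Agent:
    def __init__(self, name, subcontractors=None, fragment=b""):
        self.name = name
        self.subcontractors = subcontractors or []
        self.fragment = fragment  # non-empty only for "byte layer" leaves

    def design(self):
        if not self.subcontractors:  # periphery of the scaffold
            return self.fragment
        # Inner agents contribute no bytes of their own; their feature
        # emerges from the ordered contributions of their sub-contractors.
        return b"".join(sub.design() for sub in self.subcontractors)


load_a = Agent("load input A", fragment=b"\x48\x8b\x07")  # hypothetical bytes
load_b = Agent("load input B", fragment=b"\x48\x8b\x1e")
add_op = Agent("add", fragment=b"\x48\x01\xd8")
adder = Agent("add two numbers", [load_a, load_b, add_op])  # no code of its own

assert adder.design() == b"\x48\x8b\x07\x48\x8b\x1e\x48\x01\xd8"
```

Because each subtree's `design()` call is independent, the contracts could run in parallel, which mirrors the parallelism claim above; the real system would coordinate placement through the construction-site protocol rather than simple concatenation.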
The biggest announcement of the month was the new kind of decentralized exchange proposed by @jy-p of Company 0. The Community Discussions section considers the stakeholders' response.
dcrd: Peer management and connectivity improvements. Some work on an improved sighash algo. A new optimization that gives 3-4x faster serving of headers, which is great for SPV. This was another step towards multipeer parallel downloads – check this issue for a clear overview of progress and planned work for the next months (and some engineering delight). As usual, codebase cleanup, improvements to error handling, test infrastructure and test coverage.
Decrediton: work towards watch-only wallets, lots of bugfixes and visual design improvements. Preliminary work to integrate SPV has begun.
Politeia is live on testnet! Useful links: announcement, introduction, command line voting example, example proposal with some votes, mini-guide on how to compose a proposal.
Trezor: Decred appeared in the firmware update and on the Trezor website, currently for testnet only. Next steps are mainnet support and integration in wallets. For the progress of Decrediton support you can track this meta issue.
dcrdata: Continued work on Insight API support, see this meta issue for a progress overview. It is important for integrations due to its popularity. Ongoing work to add charts. A big database change to improve sorting on the Address page was merged and bumped the version to 3.0. Work to visualize agenda voting continues.
Ticket splitting: the 11-way ticket split from last month has voted (transaction).
Ethereum support in atomicswap is progressing and welcomes more eyeballs.
decred.org: revamped Press page with dozens of added articles, and a shiny new Roadmap page.
decredinfo.com: a new Decred dashboard by lte13. Reddit announcement here.
Dev activity stats for June: 245 active PRs, 184 master commits, 25,973 added and 13,575 deleted lines spread across 8 repositories.
Contributions came from 2 to 10 developers per repository. (chart)
Hashrate: growth continues, the month started at 15 and ended at 44 PH/s with some wild 30% swings on the way. The peak was 53.9 PH/s. F2Pool was the leader, varying between 36% and 59% of the hashrate, followed by coinmine.pl holding between 18% and 29%. In response to concerns about its hashrate share, F2Pool made a statement that they will consider measures like raising the fees to prevent growing to 51%.
Staking: the 30-day average ticket price is 94.7 DCR (+3.4). The price rose steadily from 90.7 to 95.8, peaking at 98.1. Locked DCR grew from 3.68 to 3.81 million DCR; the highest value was 3.83 million, corresponding to 47.87% of supply (+0.7% from the previous peak).
Nodes: there are 240 public listening and 115 normal nodes per decred.eu. Version distribution: 57% on v1.2.0 (+12%), 25% on v1.1.2 (-13%), 14% on v1.1.0 (-1%). Note: the reported count of non-listening nodes has dropped significantly due to a data reset at decred.eu. It will take some time before the crawler collects more data. On top of that, there is no way to exactly count non-listening nodes. To illustrate, an alternative data source, charts.dcr.farm, showed 690 reachable nodes on Jul 1.
Extraordinary event: 247361 and 247362 were two nearly full blocks. Normally blocks are 10-20 KiB, but these blocks were 374 KiB (the max is 384 KiB).
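The staking figures above are internally consistent; as a quick arithmetic check (the total-supply number below is derived from the report's own percentages, not stated in it):

```python
# Sanity check: 3.83 million locked DCR at 47.87% of supply implies
# a circulating supply of roughly 8.0 million DCR at the time.
locked_dcr = 3.83e6    # peak locked DCR (from the report)
locked_share = 0.4787  # 47.87% of supply (from the report)

total_supply = locked_dcr / locked_share
assert round(total_supply / 1e6, 1) == 8.0  # roughly 8.0 million DCR
```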
Update from Obelisk: shipping is expected in the first half of July and there is a non-zero chance of meeting the hashrate target. Another Chinese ASIC spotted on the web: Flying Fish D18 with 340 GH/s at 180 W, costing 2,200 CNY (~340 USD). (asicok.com – translated, also on asicminervalue) The dcrASIC team posted a farewell letter. Despite having an awesome 16 nm chip design, they decided to stop the project, citing the saturated mining ecosystem and low profitability for their potential customers.
Changenow announced the option to buy DCR with fiat.
TokenPride: "We are seeking feedback on the general setup of our payment processor. We have tried to make it simple and user friendly. 10% of all purchases made in Decred will be donated to the Decred Development fund - and we will be releasing original Decred designs in the future".
BlueYard Capital announced investment in Decred and the intent to be long term supporters and to actively participate in the network's governance. In an overview post they stressed core values of the project:
There are a few other remarkable characteristics that are a testament to the DNA of the team behind Decred: there was no sale of DCR to investors, no venture funding, and no payment to exchanges to be listed – underscoring that the Decred team and contributors are all about doing the right thing for long term (as manifested in their constitution for the project). The most encouraging thing we can see is both the quality and quantity of high calibre developers flocking to the project, in addition to a vibrant community attaching their identity to the project.
The company will be hosting an event in Berlin, see Events below. Arbitrade is now mining Decred.
Campus Party in Brasilia, Brazil. @girino, @Rhama and @matheusd talked about Decred. Matheus was interviewed by a TV channel. Check this quick report about the event, click "Show newer" to continue reading. (photos: 1, 2, 3)
Blockchain Summit in London, UK. This was not a full-blown presence with a stand, but rather an investigation of opportunities by @kyle and @Ani. The resulting detailed report is a good example of a document advising stakeholders whether it is worth spending project funds.
Meetup in Berlin, Germany on July 18. @jz will give a talk and Q&A about Decred and chat with Ele from @oscoin about incentivizing developers. Hosted by BlueYard Capital.
Hey guys! I'd like to share with you my latest adventure: Stakey Club, hosted at stakey.club, is a website dedicated to Decred. I posted a few articles in Brazilian Portuguese and in English. I also translated to Portuguese some posts from the Decred Blog. I hope you like it! (slack)
Decred Assembly - Ep20 - Governance: Driving the Future (youtube). @cburniske and @traceagain discuss the importance of governance protocols being foundational, and problems with delegated proof of stake.
"I think that developers in the future are going to base their decision on where to build on the basis of governance and community. And so I look for good governance mechanisms and strong communities in blockchains." (@decredproject)
What is on-chain cryptocurrency governance? Is it plutocratic? by Richard Red (medium)
Apples to apples, Decred is 20x more expensive to attack than Bitcoin by Zubair Zia (medium)
What makes Decred different and better from other cryptocurrencies? (cxihub.com)
Community stats: Twitter followers 40,209 (+1,091), Reddit subscribers 8,410 (+243), Slack users 5,830 (+172), GitHub 392 stars and 918 forks of dcrd repository. An update on our communication systems:
Matrix chat logs are now viewable on the web, with the exception of some channels that are not bridged. The new web logs mean our chats are now fully public and indexed by search engines.
Slack had an outage on Jun 27 that disturbed communications for a few hours, discussions continued on Decred's bridged platforms.
Jake Yocom-Piatt did an AMA on CryptoTechnology, a forum for serious crypto tech discussion. Some topics covered were Decred attack cost and resistance, voting policies, smart contracts, SPV security, DAO and DPoS. A new kind of DEX was the subject of an extensive discussion in #general, #random, #trading channels as well as Reddit. New channel #thedex was created and attracted more than 100 people. A frequent and fair question is how the DEX would benefit Decred. @lukebp has put it well:
Projects like these help Decred attract talent. Typically, the people that are the best at what they do aren’t driven solely by money. They want to work on interesting projects that they believe in with other talented individuals. Launching a DEX that has no trading fees, no requirement to buy a 3rd party token (including Decred), and that cuts out all middlemen is a clear demonstration of the ethos that Decred was founded on. It helps us get our name out there and attract the type of people that believe in the same mission that we do. (slack)
Another concern that it will slow down other projects was addressed by @davecgh:
The intent is for an external team to take up the mantle and build it, so it won't have any bearing on the current c0 roadmap. The important thing to keep in mind is that the goal of Decred is to have a bunch of independent teams working on different things. (slack)
A chat about Decred fork resistance started on Twitter and continued in #trading. Community members continue to discuss the finer points of Decred's hybrid system, bringing new users up to speed and answering their questions. The key takeaway from this chat is that the Decred chain is impossible to advance without votes, and to get around that the forker needs to change the protocol in a way that would make it clearly not Decred.
"Against community governance" article was discussed on Reddit and #governance. "The Downside of Democracy (and What it Means for Blockchain Governance)" was another article arguing against on-chain governance, discussed here.
Reddit recap: mining rig shops discussion; how centralized is Politeia; controversial debate on photos of models that yielded useful discussion on our marketing approach; analysis of a drop in number of transactions; concerns regarding project bus factor, removing central authorities, advertising and full node count – received detailed responses; an argument by insette for maximizing aggregate tx fees; coordinating network upgrades; a new "Why Decred?" thread; a question about quantum resistance with a detailed answer and a recap of the current status of quantum resistant algorithms.
Chats recap: Programmatic Proof-of-Work (ProgPoW) discussion; possible hashrate of Blake-256 miners is at least ~30% higher than SHA-256d; how Decred is not vulnerable to the SPV leaf/node attack.
DCR opened the month at ~$93, reached monthly high of $110, gradually dropped to the low of $58 and closed at $67. In BTC terms it was 0.0125 -> 0.0150 -> 0.0098 -> 0.0105. The downturn coincided with a global decline across the whole crypto market. In the middle of the month Decred was noticed to be #1 in onchainfx "% down from ATH" chart and on this chart by @CoinzTrader. Towards the end of the month it dropped to #3.
Please note: we will not accept any kind of payment to list an asset.
Bithumb got hacked with a $30 million loss. Zcash organized Zcon0, an event in Canada that focused on privacy tech and governance. An interesting insight from the Keynote Panel on governance: "There is no such thing as on-chain governance". Microsoft acquired GitHub. There was some debate about whether it is a reason to look into alternative solutions like GitLab right now. It is always a good idea to have a local copy of the Decred source code, just in case. Status update from @sumiflow on correcting DCR supply on various sites:
To begin with, none of the below sites were showing the correct supply or market cap for Decred but we've made some progress. coingecko.com, coinlib.io, cryptocompare.com, livecoinwatch.com, worldcoinindex.com - corrected! cryptoindex.co, onchainfx.com - awaiting fix coinmarketcap.com - refused to fix because devs have coins too? (slack)
About This Issue
This is the third issue of Decred Journal, after April and May. Most information from third parties is relayed directly from the source after a minimal sanity check. The authors of Decred Journal have no ability to verify all claims. Please beware of scams and do your own research. The new public Matrix logs look promising and we hope to transition from Slack links to Matrix links. In the meantime, the way to read Slack links is explained in the previous issue. As usual, any feedback is appreciated: please comment on Reddit, GitHub or #writers_room. Contributions are welcome too, anything from initial collection to final review to translations. Credits (Slack names, alphabetical order): bee and Richard-Red. Special thanks to @Haon for bringing the May 2018 issue to Medium.
Since our last newsletter, we have started open-sourcing our networking stack and exploring strategic partnerships. Here are the highlights: started to open source our codebase and a new umbrella project called libunison; hired 2 new teammates and started targeting 100+ strategic partners; submitted an arxiv preprint of a neuroscience paper and updated our testnet architecture; continued growing the TGI community and conducting 10+ podcast interviews.
Open source & networking with libunison
We have identified data availability and block propagation as the main bottlenecks of scaling transactions with tens of thousands of (some potentially malicious) nodes over the Internet. Our insight is to use the RaptorQ fountain code, in conjunction with a forward error correction scheme, for broadcasting message blocks over adversarial networks without incurring round-trip delays to recover from packet losses. Here we're launching our open source effort github.com/harmony-one with a go-raptorq wrapper under our umbrella project libunison (see our roadmap). Libunison is an end-to-end, peer-to-peer networking library for any application that needs to self-organize an emerging network of nodes. The library is built upon existing standardized technologies, including Host Identity Protocol (HIPv2) and Encapsulating Security Payload (ESP), to leverage decades of research, development and deployment insights. Harmony is open sourcing libunison as one of the foundational layers of not only our network but also other performant, decentralized networks such as peer multicasts.
2 new teammates & 100+ strategic partners
Our team is growing! Chao Ma (Amazon AI engineer, Math Ph.D. at CU Boulder, non-linear analysis researcher) is joining the team to tackle protocol research and statistical consensus. Chao has been researching blockchain algorithms since 2017 and recently implemented a toy IPFS for fun.
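The fountain-code broadcasting idea described above can be illustrated with a toy single-parity sketch. This is a hypothetical illustration, not Harmony's go-raptorq code: real RaptorQ codes (RFC 6330) generate many repair symbols and recover from many losses, while this XOR version recovers exactly one lost packet without any retransmission round-trip.

```python
def make_parity(packets):
    """XOR equal-length packets together to form one repair packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Recover the single missing packet: XOR the parity with every received packet."""
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

data = [b"block-01", b"block-02", b"block-03"]
parity = make_parity(data)

# Suppose the second packet is lost in transit; the receiver still has the parity.
restored = recover([data[0], data[2]], parity)
assert restored == b"block-02"
```

The point of the fountain-code approach is that the sender can emit repair symbols proactively, so receivers reconstruct the block from whatever subset arrives instead of asking for retransmissions.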
So did our good friend Li Jiang (GSV Capital, logistics startup founder, Northwestern University adjunct, nickname 蒋·和梦·犁). Li has been our evangelist since our first China trip in February and finally decided to jump off the cliff to lead Harmony's partnership efforts full time. As the newest node with awe on the Harmony team, Li also serves as "Chief Frisbee Officer" to keep us active in the winter.
We are planning our second token sale. Inspired by these insightful articles by Multicoin and by Notation on value-adding investors as operators, we are asking our new investors to operate Harmony nodes. Scalability and decentralization are the two most important metrics for Harmony to succeed. We will achieve both by having tens of thousands of nodes, the scale of Bitcoin and Ethereum, run by many independent entities in jurisdictions all over the world. Having many nodes is key to network performance with our sharding approach; meanwhile, having independent entities is key to network security with our permissionless principle. If you are non-US based and looking to participate in this strategic round, contact us at harmony.one/partners.
Neuroscience preprint & testnet architecture
Our colleague Prof. Lau has led our team with his research and submitted a paper, "Blockchain and human episodic memory" (see preprint on arxiv), relating brain consciousness to blockchain consensus. We highlight that certain phenomena studied in the brain, namely metacognition, reality monitoring, and how perceptual conscious experiences come about, may inspire development in blockchain technology too, specifically regarding probabilistic consensus protocols. Our colleague Ka-yuet Liu, also at UCLA, has published Data Marketplace for Scientists on our blog. She highlights a modern economic theory of the nonrivalry of data, concluding that "blockchain can turn wasteful competition between large-scale science projects into synergy" among internationally recognized scientists like themselves.
Our testnet architecture has been updated to apply the latest research results and progress made by Ethereum 2.0. Zero-knowledge proofs by Starkware are now fast enough to be generated on mobile clients and may be used to scale blockchains by many orders of magnitude. Fraud proofs (with 2D erasure codes and interleaved sampling), stateless clients (with algebraic vector commitments), comparing synchronous (with exact round complexity of 10, versus 29 previously) versus partially synchronous protocols, and integrating 99% fault tolerance (with hybrid threshold-dependent and latency-dependent consensus) are on our roadmap.
Growing TGI community & 10+ interviews
Early this month, we had an in-depth founder interview with Hacker Noon, one of the most-read publications among engineers and entrepreneurs. On the topic of attracting users and building communities, we answered "some conversations are multiplicative — we multiply each other's dreams. And every once in a while, a conversation is exponential, meaning we really build deep belief in each other's vision and can make it come to life." Furthermore, Spencer writes about the Future of Scalable Blockchain and compares Harmony to Ethereum 2.0, Dfinity, Cardano and Nervos, complimenting that Harmony's approach "is highly cerebral and in tune with the best technology currently available… their spirit of inclusion and entrepreneurship feels a bit more sincere."
We continue to engage a global community to share the Harmony story. Here are just a few podcast interviewers and writers we are engaging with. Be sure to check out their work and keep an eye out as our stories will be published soon. Our conversations with these influencers span Silicon Valley, China, SE Asia, India, Australia and Brazil this month.
Thanks to Jon Victor from The Information, Joyce Yang from Global Coin Research, Tushar Aggarwal from LunexVC & DecryptAsia, Brad Laurie (also known as BlockchainBrad) and Gerson Ribeiro from Startup de Alto Impacto for sharing our journey. We are also hosting TGI-Blockchain on Saturdays, 12pm to 4pm, at our home-office for fellow founders and collaborators to deeply engage with each other. We are inspired by these builders presenting their works (sign up here!) every Saturday, including Timeless Protocol, Rational Mind, Blue Vista and Tara.AI in recent weeks. Our team is sharing our learnings globally at a recent talk in India and upcoming events in Hong Kong and online with TokenGazer, as well as meeting our local friends from the ABC Blockchain and Xoogler communities.
Essential advice & your help
We're taking the top two points from Y Combinator's Essential Startup Advice (posted next to our coffee machine) to heart: launch now, and build something people want. We published a survey on blockchain testnets and we're laser-focused on building our own public testnet, implementing the information dispersal algorithm, state syncing and resharding at the moment. Lastly, we need your help on hiring database engineers to hack on Byzantine agreements and broadcasts, and on bringing in strategic investors to run Harmony nodes all over the world!
Stephen Tse
Harmony CEO
https://harmony.one/
Blowing the lid off the CryptoNote/Bytecoin scam (with the exception of Monero) - Reformatted for Reddit
Original post by rethink-your-strategy on Bitcointalk.org here This post has been reformatted to share on Reddit. What once was common knowledge, is now gone. You want a quality history lesson? Share this like wildfire. August 15, 2014, 08:15:37 AM
I'd like to start off by stating categorically that the cryptography presented by CryptoNote is completely, entirely solid. It has been vetted and looked over by fucking clever cryptographers/developers/wizards such as gmaxwell. Monero have had a group of independent mathematicians and cryptographers peer-reviewing the whitepaper (their annotations are here, and one of their reviews is here), and this same group of mathematicians and cryptographers is now reviewing the implementation of the cryptography in the Monero codebase. Many well known Bitcoin developers have already had a cursory look through the code to establish its validity. It is safe to say that, barring more exotic attacks that have to be mitigated over time as they are invented/discovered, and barring a CryptoNote implementation making rash decisions to implement something that reduces the anonymity set, the CryptoNote currencies are all cryptographically unlinkable and untraceable. Two other things I should mention: I curse a lot when I'm angry (and scams like this make me angry), and where used, my short date format is day/month/year (smallest to biggest). If you find this information useful, a little donation would go a long way. Bitcoin address is 1rysLufu4qdVBRDyrf8ZjXy1nM19smTWd.
The Alleged CryptoNote/Bytecoin Story
CryptoNote is a new cryptocurrency protocol. It builds on some of the Bitcoin founding principles, but it adds to them. There are aspects of it that are truly well thought through and, in a sense, quite revolutionary. CryptoNote claim to have started working on their project years ago, after Bitcoin's release, and I do not doubt the validity of this claim...clearly there's a lot of work and effort that went into this. The story as Bytecoin and CryptoNote claim it to be is as follows: They developed the code for the principles expressed in their whitepaper, and in April, 2012, they released Bytecoin. All of the copyright messages in Bytecoin's code are "copyright the CryptoNote Developers", so clearly they are one and the same as the Bytecoin developers. In December 2012, they released their CryptoNote v1 whitepaper. In September 2013, they released their CryptoNote v2 whitepaper. In November 2013, the first piece of the Bytecoin code was pushed to Github by "amjuarez", with a "Copyright (c) 2013 amjuarez" copyright notice. This was changed to "Copyright (c) 2013 Antonio Juarez" on March 3rd, 2014. By this juncture only the crypto libraries had been pushed up to github. Then, on March 4th, 2014, "amjuarez" pushed the rest of the code up to github, with the README strangely referring to "cybernote", even though the code referred to "Cryptonote". The copyrights all pointed to "the Cryptonote developers", and the "Antonio Juarez" copyright and license file was removed. Within a few days, "DStrange" stumbled across the bytecoin.org website when trying to mine on the bte.minefor.co.in pool (a pool for the-other-Bytecoin, BTE, not the-new-Bytecoin, BCN), and the rest is history as we know it. By this time Bytecoin had had a little over 80% of its total emission mined.
Immediate Red Flags
The first thing that is a red flag in all of this is that nobody, and I mean no-fucking-body, is a known entity. "Antonio Juarez" is not a known entity, "DStrange" is not a known entity, none of the made up names on the Bytecoin website exist (they've since removed their "team" page, see below), none of the made up names on the CryptoNote website exist (Johannes Meier, Maurice Planck, Max Jameson, Brandon Hawking, Catherine Erwin, Albert Werner, Marec Plíškov). If they're pseudonyms, then say so. If they're real names, then who the fuck are they??? Cryptographers, mathematicians, and computer scientists are well known - they have published papers or at least have commented on articles of interest. Many of them have their own github repos and Twitter feeds, and are a presence in the cryptocurrency community. The other immediate red flag is that nobody, and I mean no-fucking-body, had heard of Bytecoin. Those that had heard of it thought it was the crummy SHA-256 Bitcoin clone that was a flop in the market. Bytecoin's claim that it had existed "on the deep web" for 2 years was not well received, because not a single vendor, user, miner, drug addict, drug seller, porn broker, fake ID card manufacturer, student who bought a fake ID card to get into bars, libertarian, libertard, cryptographer, Tor developer, Freenet developer, i2p developer, pedophile, or anyone else that is a known person - even just known on the Internet - had ever encountered "Bytecoin" on Tor. Ever. Nobody.
Before I start with some conjecture and educated guesswork, I'd like to focus on an indisputable fact that obliterates any trust in both Bytecoin's and CryptoNote's bullshit story. Note, again, that I do not doubt the efficacy of the mathematics and cryptography behind CryptoNote, nor do I think there are backdoors in the code. What I do know for a fact is that the people behind CryptoNote and Bytecoin have actively deceived the Bitcoin and cryptocurrency community, and that makes them untrustworthy now and in the future. If you believe in the fundamentals of CryptoNote, then you need simply use a CryptoNote-derived cryptocurrency that is demonstrably independent of CryptoNote and Bytecoin's influence. Don't worry, I go into this a little later. So as discussed, there were these two whitepapers that I linked to earlier. Just in case they try to remove them, here are the v1 whitepaper and the v2 whitepaper mirrored on Archive.org. This v1/v2 whitepaper thing has been discussed at length on the Bytecoin forum thread, and the PGP signature on the files has been confirmed as being valid. When you open the respective PDFs you'll notice the valid signatures in them: the signature in the v1 whitepaper and the signature in the v2 whitepaper. These are valid Adobe signatures, signed on 15/12/2012 and 17/10/2013 respectively. Here's where it gets interesting. When we inspect this file in Adobe Acrobat we get a little more information on the signature. Notice the bit that says "Signing time is from the clock on the signer's computer"? Now normally you would use a Timestamp Authority (TSA) to validate your system time. There are enough public, free, RFC 3161 compatible TSAs that this is not a difficult thing. CryptoNote chose not to do this. But we have no reason to doubt the time on the signature, right guys? Crickets. See these references from the v1 whitepaper footnotes? Those two also appear in the v2 whitepaper.
Neither of those two footnotes refers to anything in the main body of the v1 whitepaper's text; they're non-existent (in the v2 whitepaper they are used in the text). The problem, though, is that the Bitcointalk post linked in the footnote is not from early 2012 (proof the screenshot is authentic: https://bitcointalk.org/index.php?topic=196259.0). It is from May 5, 2013. The footnote is referencing a post that did not exist until then. And yet we are to believe that the whitepaper was signed on 15/12/2012! What sort of fucking fools do they take us for? A little bit of extra digging validates this further. The document properties for both the v1 whitepaper and the v2 whitepaper confirm they were made in TeX Live 2013, which did not exist on 15/12/2012. The XMP properties are also quite revealing: see the XMP properties for the v1 whitepaper and the XMP properties for the v2 whitepaper. According to those, the v1 whitepaper PDF was created on 10/04/2014, and the v2 whitepaper was created on 13/03/2014. And yet both of these documents were then modified in the past (when they were signed). Clearly the CryptoNote/Bytecoin developers are so advanced they also have a time machine, right? Final confirmation that these creation dates are correct is revealed by those XMP properties. The properties on both documents confirm that the PDF itself was generated from the LaTeX source using pdfTeX-1.40.14 (the pdf:Producer property). Now pdfTeX is a very old piece of software that isn't updated very often, so the minor version (the .14 part) is important. This version of pdfTeX was only pushed to the pdfTeX source repository on February 14, 2014, although it was included in a very early version of TeX Live 2013 (version 2013.20130523-1) that was released on May 23, 2013. The earliest mentions on the Internet of this version of pdfTeX are in two Stack Exchange comments that confirm its general availability at the end of May 2013 (here and here).
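The timeline argument above reduces to a simple consistency check: a document cannot have been produced by a toolchain that did not yet exist on its claimed signing date. A minimal sketch of that check, using the dates stated in this post (note it flags only the v1 signature; the v2 date passes this particular test, which is why the XMP creation dates matter too):

```python
from datetime import date

# Claimed signing dates from the embedded Adobe signatures (per this post).
claimed_signed = {"v1": date(2012, 12, 15), "v2": date(2013, 10, 17)}

# Earliest public availability of the producer named in the PDF metadata:
# pdfTeX 1.40.14 first shipped in TeX Live 2013 (2013.20130523-1), May 23, 2013.
producer_available = date(2013, 5, 23)

for name, signed in claimed_signed.items():
    # A signature predating its own producer software is impossible.
    plausible = signed >= producer_available
    print(f"{name} whitepaper: signed {signed}, plausible={plausible}")
```

Running this marks the v1 signing date as impossible, which is exactly the time-machine contradiction described above.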
The conclusion we draw from this is that the CryptoNote developers, as clever as they were, intentionally deceived everyone into believing that the CryptoNote whitepapers were signed in 2012 and 2013, when the reality is that the v2 whitepaper was created in March, 2014, and the v1 whitepaper haphazardly created a month later by stripping bits out of the v2 whitepaper (accidentally leaving dead footnotes in). Why would they create this fake v2 whitepaper in the first place? Why not just create a v1 whitepaper, or not even version it at all? The answer is simple: they wanted to lend credence and validity to the Bytecoin "2 years on the darkweb" claim so that everyone involved in CryptoNote and Bytecoin could profit from the 2 year fake mine of 82% of Bytecoin. What they didn't expect is the market to say "no thank you" to their premine scam.
And Now for Some Conjecture
As I mentioned earlier, the Bytecoin "team" page disappeared. I know it exists, because "AtomicDoge" referred to it as saying that one of the Bytecoin developers is a professor at Princeton. I called them out on it, and within a week the page had disappeared. Fucking cowards. That was the event that triggered my desire to dig deeper and uncover the fuckery. As I discovered more and more oddities, fake accounts, trolling, and outright falsehoods, I wondered how deep the rabbit hole went. My starting point was DStrange. This is the account on Bitcointalk that "discovered" Bytecoin accidentally a mere 6 days after the first working iteration of the code was pushed to Github, purely by chance when mining a nearly dead currency on a tiny and virtually unheard of mining pool. He has subsequently appointed himself the representative of Bytecoin, or something similar. The whole thing is so badly scripted it's worse than a Spanish soap opera...I can't tell who Mr. Gonzales, the chief surgeon, is going to fuck next. At the same time as DStrange made his "fuck me accidental discovery", another Bitcointalk account flared up to also "accidentally discover this weird thing that has randomly been discovered": Rias. What's interesting about both the "Rias" and "DStrange" accounts are their late 2013 creation date (October 31, 2013, and December 23, 2013, respectively), and yet they lay dormant until suddenly, out of the blue, on January 20th/21st they started posting. If you look at their early posts side by side you can even see the clustering: Rias, DStrange. At any rate, the DStrange account "discovering" Bytecoin is beyond hilarious, especially with the Rias account chiming in to make the discovery seem natural. Knowing what we unmistakably do about the fake CryptoNote PDF dates lets us see this in a whole new light. 
Of course, as has been pointed out before, the Bytecoin website did not exist in its "discovered" form until sometime between November 13, 2013 (when it was last captured as this random picture of a college girl) and February 25, 2014 (when it suddenly had the website on it as "discovered"). This can be confirmed by looking at the captures on Wayback Machine: https://web.archive.org/web/*/http://bytecoin.org The CryptoNote website, too, did not exist in its current form until after October 20, 2013, at which time it was still the home of an encrypted message project by Alain Meier, a founding member of the Stanford Bitcoin Group and co-founder of BlockScore. This, too, can be confirmed on Wayback Machine: https://web.archive.org/web/*/http://cryptonote.org ~It's hard to ascertain whether Alain had anything to do with CryptoNote or Bytecoin. It's certainly conceivable that the whitepaper was put together by him and other members of the Stanford Bitcoin Group, and the timeline fits, given that the group only formed around March 2013. More info on the people in the group can be found on their site, and determining if they played a role is something you can do in your own time.~ Update: Alain Meier posted in this thread, and followed it up with a Tweet, confirming that he has nothing to do with CryptoNote and all the related...stuff.
The Bytecoin guys revel in creating and using sockpuppet accounts. Remember that conversation where "Rias" asked who would put v1 on a whitepaper with no v2 out, and AlexGR said "a forward looking individual"? The conversation took place on May 30, and was repeated verbatim by shill accounts on Reddit on August 4 (also, screenshot in case they take it down). Those two obvious sockpuppet/shill accounts also take delight in bashing Monero in the Monero sub-reddit (here are snippets from WhiteDynomite and cheri0). Literally the only thing these sockpuppets do, day in and day out, is make the Bytecoin sub-reddit look like it's trafficked, and spew angry bullshit all over the Monero sub-reddit. Fucking batshit insane - who the fuck has time for that? Clearly they're pissy that nobody has fallen for their scam. Oh, and did I mention that all of these sockpuppets have a late January/early February creation date? Because that's not fucking obvious at all. And let's not forget that most recently the sockpuppets claimed that multi-sig is "a new revolutionary technology, it was discovered a short time ago and Bytecoin already implemented it". What the actual fuck. If you think that's bad, you're missing out on the best part of all: the Bytecoin shills claim that Bytecoin is actually Satoshi Nakamoto's work. I'm not fucking kidding you. For your viewing pleasure...I present to you...the Bytecoin Batshit Insane Circus: https://bitcointalk.org/index.php?topic=512747.msg8354977#msg8354977 Seriously. Not only is this insulting as fuck to Satoshi Nakamoto, but it's insulting as fuck to our intelligence. And yet the fun doesn't stop there, folks! I present to you...the centerpiece of this Bytecoin Batshit Insane Circus exhibit... Of course! How could we have missed it! The clues were there all along! The CryptoNote/Bytecoin developers are actually aliens! Fuck me on a pogostick, this is the sort of stuff that results in people getting committed to the loony bin.
One last thing: without doing too much language analysis (which is mostly supposition and bullshit), it's easy to see common grammar and spelling fuck ups. My personal favorite is the "Is it true?" question. You can see it in the Bytecoin thread asking if it's Satoshi's second project, in the Monero thread asking if the Monero devs use a botnet to fake demand, and in the Dashcoin thread confirming the donation address (for a coin whose only claim is that they copy Bytecoin perfectly, what the fuck do they need donations for??).
Layer After Layer
All Tied Up in a Bow
I want to cement the relationship between the major CryptoNote shitcoins. I know that my previous section had a lot of conjecture in it, and there's been some insinuation that I'm throwing everyone under the bus because I'm raging against the machine. That's not my style. I'm more of a Katy Perry fan..."you're going to hear me roar". There were some extra links I uncovered during my research, and I lacked the time to add them to this post. Thankfully a little bit of sleep and a can of Monster later have given me a chance to add this. Let's start with an analysis of the DNS records of the CN coins. If we look at the whois and DNS records for bytecoin.org, quazarcoin.org, fantomcoin.org, monetaverde.org, cryptonote.org, bytecoiner.org, cryptonotefoundation.org, cryptonotestarter.org, and boolberry.com, we find three common traits, from not-entirely-damning to oh-shiiiiiiit:
There's a lot of commonality with the registrar (NameCheap for almost all of them), the DNS service (HurricaneElectric's Free DNS or NameCheap's DNS), and with the webhost (LibertyVPS, QHoster, SecureFastServer.com, etc.)
All of the CN domains use WhoisGuard or similar private registration services.
Every single domain, without exception, uses Zoho for email. The only outlier is bitmonero.org that uses Namecheap's free email forwarding, but it's safe to disregard this as the emails probably just forward to the CryptoNote developers' email.
The instinct may be to disregard this as a fucking convenient coincidence. But it isn't: Zoho used to be a distant second to Google Apps, but has since fallen hopelessly behind. Everyone uses Google Apps, or they just use mail forwarding or whatever. With the rest of the points as well, as far-fetched as the link may seem, it's the combination that is unusual and a dead giveaway of the common thread. Just to demonstrate that I'm not "blowing shit out of proportion", I went and checked the records for a handful of coins launched over the past few months to see what they use.
darkcoin.io: mail: Namecheap email forwarding, hosting: Amazon AWS, open registration through NameCheap
monero.cc: mail: mail.monero.cc, hosting: behind CloudFlare, open registration through Gandi
xc-official.com: mail: Google Apps, hosting: MODX Cloud, hidden registration (DomainsByProxy) through GoDaddy
blackcoin.io: mail: Namecheap email forwarding, hosting: behind BlackLotus, open registration through NameCheap
bitcoindark.org: mail: no MX records, hosting: Google User Content, open registration through Wix
viacoin.org: mail: mx.viacoin.org, hosting: behind CloudFlare, closed registration (ContactPrivacy) through Hostnuke.com
neutrinocoin.org: mail: HostGator, hosting: HostGator, open registration through HostGator
There's no common thread between them. Everyone uses different service providers and different platforms. And none of them use Zoho. My next check was to inspect the web page source code for these sites to find a further link. If you take a look at the main CSS file linked in the source code for monetaverde.org, fantomcoin.org, quazarcoin.org, cryptonotefoundation.org, cryptonote-coin.org, cryptonote.org, bitmonero.org, and bytecoiner.org, we find a CSS reset snippet at the top. It has a comment at the top that says "/* CSS Reset */", and then where it resets/sets the height it has the comment "/* always display scrollbars */".
Now, as near as I can find, this is a CSS snippet first published by Jake Rocheleau in an article on WebDesignLedger on October 24, 2012 (although confusingly Google seems to think it appeared on plumi.de cnippetz first, but checking archive.org shows that it was only added to that site at the beginning of 2013). It isn't a very popular CSS reset snippet: it got dumped in a couple of gists on Github, and translated and re-published in an article on a Russian website in November 2012 (let's not go full-blown conspiritard and assume this links "cryptozoidberg" back to this, he's culpable enough on his own). It's unusual to the point of being fucking impossible for one site to be using this, let alone a whole string of supposedly unrelated sites. Over the past few years the most popular CSS reset scripts have been Eric Meyer's "Reset CSS", HTML5 Doctor CSS Reset, Yahoo! (YUI 3) Reset CSS, Universal Selector ‘*’ Reset, and Normalize.css, none of which contain the "/* CSS Reset */" or "/* always display scrollbars */" comments. You've got to ask yourself a simple question: at what point does the combination of all of these fucking coincidental, completely unusual elements stop being coincidence and start becoming evidence of a real, tenable link? Is it possible that bytecoin.org, quazarcoin.org, fantomcoin.org, monetaverde.org, cryptonote.org, bytecoiner.org, cryptonotefoundation.org, cryptonotestarter.org, and boolberry.com just happen to use similar registrars/DNS providers/web hosts and exactly the fucking same wildly unpopular email provider? And is it also possible that monetaverde.org, fantomcoin.org, quazarcoin.org, cryptonotefoundation.org, cryptonote-coin.org, cryptonote.org, and bytecoin.org just happen to use the same completely unknown, incredibly obscure CSS reset snippet? It's not a conspiracy, it's not a coincidence, it's just another piece of evidence that all of these were spewed out by the same fucking people.
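The CSS fingerprinting check is trivial to replicate. A minimal sketch: the two marker comments are the ones observed in the sites' stylesheets; the helper name and the sample strings are mine, for illustration (a real check would fetch each site's main CSS file first):

```python
# Illustrative sketch: fingerprint a stylesheet by the two distinctive
# comments found in the CryptoNote sites' shared CSS reset. The function
# name and sample inputs are hypothetical; the marker strings are the
# comments as observed in the post.
FINGERPRINT_MARKERS = (
    "/* CSS Reset */",
    "/* always display scrollbars */",
)

def has_cryptonote_reset(css_text: str) -> bool:
    """Return True if the stylesheet contains both telltale comments."""
    return all(marker in css_text for marker in FINGERPRINT_MARKERS)

# A stylesheet carrying both comments matches; a popular reset such as
# Eric Meyer's or Normalize.css contains neither, so it does not.
matching = "/* CSS Reset */\nbody { margin: 0; } /* always display scrollbars */"
assert has_cryptonote_reset(matching)
assert not has_cryptonote_reset("/* normalize.css */ html { margin: 0; }")
```

Run across the seven domains listed above, every one of them matches; run across the control group, none do, which is the whole argument in two booleans.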
The Conclusion of the Matter
Don't take the last section as any sort of push for Monero. I think it's got potential (certainly much more than the other retarded "anonymous" coins that "developers" are popping out like street children from a cheap ho), and I hold a bit of XMR for shits and giggles, so take that tacit endorsement with a pinch of fucking salt.

The point is this: Bytecoin's 82% premine was definitely the result of a faked blockchain. CryptoNote's whitepaper dates were purposely falsified to back up this bullshit claim. Both Bytecoin and CryptoNote have perpetuated this scam by making up fake website data and all sorts. They further perpetuate it using shill accounts, most notably "DStrange" and "Rias", among others. They launched a series of cryptocurrencies that should be avoided at all costs: Fantomcoin, Quazarcoin, and Monetaverde. They are likely behind duckNote and Boolberry, but fuck it, it's on your head if you want to deal with scam artists and botnet creators. They developed amazing technology and had a pretty decent implementation. They fucked themselves over by being fucking greedy, being utterly retarded, being batshit insane, and trying to create legitimacy where there was none. They lost the minute the community took Monero away from them, and no amount of damage control will save them from their own stupidity.

I expect there to be a fuck-ton of shills posting in this thread (and possibly a few genuine supporters who don't know any better). If you want to discuss or clarify something, cool, let's do that. If you want to have a protracted debate about my conjecture, then fuck off, it's called conjecture for a reason, you ignoramus. I don't really give a flying fuck if I got it right or wrong, you're old and ugly enough to make up your own mind.

tl;dr - CryptoNote developers faked dates in whitepapers. Bytecoin faked dates in a fake blockchain to facilitate an 82% premine, and CryptoNote backed them up. Bytecoin, Fantomcoin, Quazarcoin, Monetaverde, and Dashcoin are all from the same people and should be avoided like the fucking black plague. duckNote and Boolberry are probably from them as well, or are at least just fucking dodgy, and who the fuck cares anyway. Monero would have been fucking dodgy, but the community saved it. Make your own mind up about shit and demand that known people are involved and that there is fucking transparency.

End transmission. Just a reminder that if you found this information useful, a little donation would go a long way. Bitcoin address is 1rysLufu4qdVBRDyrf8ZjXy1nM19smTWd.