FPGA- And ASIC-Based Mining Devices - All About Bitcoin
FPGA Bitcoin Miner – BitcoinWiki
Bitmain Technologies Bitcoin Mining: Powered by Xilinx
Cryptocurrency Mining: Why Use FPGA for Mining? FPGA vs
FPGA Virtual Currency Miners for sale | eBay
Best $100-$300 FPGA development board in 2018?
Hello, I've been trying to decide on an FPGA development board, and have only been able to find posts and Reddit threads from 4-5 years ago. So I wanted to start a new thread and ask about the best "mid-range" FPGA development board in 2018. (Price range $100-$300.)

I started with this Quora answer about FPGA boards, from 2013. The Altera DE1 sounded good. Then I looked through the Terasic DE boards. Then I found this Reddit thread from 2014, asking about the DE1-SoC vs the Cyclone V GX Starter Kit: https://www.reddit.com/FPGA/comments/1xsk6w/cyclone_v_gx_starter_kit_vs_de1soc_board/ (I was also leaning towards the DE1-SoC.) Anyway, I thought I'd better ask here, because there are probably some new things to be aware of in 2018.

I'm completely new to FPGAs and VHDL, but I have experience with electronics/microcontrollers/programming. My goal is to start with some basic soft-core processors. I want to get some C / Rust programs compiling and running on my own CPU designs. I also want to play around with different instruction sets, and maybe start experimenting with asynchronous circuits (e.g. clock-less CPUs). Also, I don't know if this is possible, but I'd like to experiment with ternary computing, or work with analog signals instead of purely digital logic.

EDIT: I just realized that you would call those FPAAs, i.e. "analog" instead of "gate". It would be cool if there were a dev board that also had an FPAA, but no problem if not.

EDIT 2: I also realized why "analog signals on an FPGA" doesn't make any sense, because of how LUTs work. They emulate boolean logic with a lookup table, and the table can only store 0s and 1s. So there's no way to emulate a transistor in an intermediate state. I'll just have to play around with some transistors on a breadboard.

UPDATE: I've put together a table with some of the best options:
A very simple FPGA development board that plugs into a Raspberry Pi, so you have a "backup" hard-core CPU that can handle networking, etc. Supports a huge range of Pmod accessories. You can write a program/circuit so that the Raspberry Pi CPU and the FPGA work together, similar to a SoC. The proprietary bitstream format has been fully reverse engineered by Project IceStorm, and there is an open-source toolchain that can compile your hardware design to a bitstream. Has everything you need to start experimenting with FPGAs.
Xilinx Zynq 7-Series SoC - ARM Cortex-A9 processor, and Artix-7 FPGA. 125 IO pins. 1GB DDR2 RAM. Texas Instruments WiLink 8 wireless module for 802.11n Wi-Fi and Bluetooth 4.1. No LEDs or buttons, but easy to wire up your own on a breadboard. If you want to use a baseboard (e.g. the "breakyBreaky breakout board" ($49) or the piSmasher SBC ($195)), you'll need a snickerdoodle black ($195) with the pins in the "down" orientation. The base snickerdoodle only comes with pins in the "up" orientation and doesn't support any baseboards, but you can still plug jumpers into the pins and wire things up on a breadboard.
Has one of the latest Xilinx SoCs. 2 GB (512M x32) LPDDR4 memory. Wi-Fi / Bluetooth. Mini DisplayPort. 1x USB 3.0 Type Micro-B, 2x USB 3.0 Type A. Audio I/O. Four user-controllable LEDs. No buttons and only a few LEDs, but it's easy to wire up your own on a breadboard.
Xilinx Zynq 7000 SoC (ARM Cortex-A9, 7-series FPGA.) 1 GB DDR3 RAM. A few switches, push buttons, and LEDs. USB and Ethernet. Audio in/out ports. HDMI source + sink with CEC. 8 Total Processor I/O, 40 Total FPGA I/O. Also a faster version for $299 (Zybo Z7-20).
Same as DE10-Standard, but not as many peripherals, buttons, LEDs, etc.
icoBoard ($100). (Buy it here.) The icoBoard plugs into a Raspberry Pi, so it's similar to having a SoC. The iCE40-HX8K chip comes with 7,680 LUTs (logic elements). This means that after you learn the basics and create some simple circuits, you'll also have enough logic elements to run the VexRiscv soft-core CPU (the lightweight Murax SoC). The icoBoard also supports a huge range of pluggable Pmod accessories:
numato Mimas A7 ($149). An excellent development board with a Xilinx Artix 7 FPGA, so you can play with a bigger / faster FPGA and run a full RISC-V soft-core with all the options enabled, and a much higher clock speed. (The iCE40 FPGAs are a bit slow and small.)
I ordered an iCE40-HX8K Breakout Board to try out the IceStorm open source tooling. (I would have ordered an icoBoard if I had found it earlier.) I also bought a numato Mimas A7 so that I could experiment with the Artix-7 FPGA and Xilinx software (Vivado Design Suite).
What can I do with an FPGA? / How many LUTs do I need?
VexRiscv is "A FPGA friendly 32 bit RISC-V CPU implementation." This is a RISC-V implementation written in SpinalHDL. VexRiscv has a lot of plugin and configuration options. The Murax SoC is a very light SoC that can run on an iCE40-HX8K (but probably not the HX1K FPGA, which only has 1,280 LUTs). The Briey SoC only runs on Xilinx or Altera FPGAs.
Why is getting into FPGAs such a crappy experience?
I'm a hobbyist and this is my first venture into FPGAs. I understand how an FPGA works in theory: it's just a bunch of combinational logic connected by clocked flip-flops, whose topology and combinational functions can be programmed with a high-level language. I bought a Xilinx board from embeddedmicro.com and I'm going to work through their tutorials. All I want to be able to do is specify a bunch of registers and crap, and how to connect them with clocked flip-flops, to do some really basic stuff like a simple CPU with 1-2 custom instructions or something.

So why do I have to download a GIANT SIX-GIGABYTE FILE TO DO THAT? What could this software possibly be doing that it needs to be that big? In a sane world, all I'd need is a board and a simple compiler which just takes the high-level language and turns it into the topology file to upload to the board. But in the insanity in which I am currently living, I have to download some gigantic IDE that is going to be huge and probably slower than mining bitcoins on an NES. I don't know, because IT'S STILL DOWNLOADING.

So to even get to the download, I had to log into the website, register, and give them a name and a physical address (and God forbid I should leave the "Company" field empty!). The licensing crap on their website looks like you need an MBA to understand it. This company sells pieces of hardware, FFS! Why in Stallman's name can't they just make the software FOSS and let anyone download it, instead of all this BS about WEBPACK this and annual upgrade that? Xilinx, in case you haven't noticed, in order for anyone to actually use your software, THEY HAVE TO BUY A CHIP OR BOARD, AND YOU CAN MAKE MONEY OFF YOUR CUSTOMERS THAT WAY. CHARGING FOR SOFTWARE OR HAVING A BYZANTINE PROCESS FOR GETTING A FREE LICENSE MAKES ZERO SENSE FOR A HARDWARE COMPANY.
Anyone know a place where you can just buy an FPGA board, plug it into a USB port, sudo apt-get install some FOSS compiler, type your Verilog or VHDL or whatever into emacs, run 2-5 commands and have a running design? If such a place doesn't exist, some startup needs to disrupt this industry. If you make it easy for people to develop for your HW, those devs will be inclined to buy your product just to make their lives easier.
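For the iCE40 parts mentioned earlier, something close to this already exists: the Project IceStorm flow is a handful of FOSS command-line tools. A rough sketch of the flow under a few assumptions (the file names and the pin-constraint file `blinky.pcf` are placeholders, and the `-d 8k` device flag targets an iCE40-HX8K board; check your board's docs for the right part and pinout):

```shell
# Synthesize the Verilog design to a netlist with Yosys
yosys -p "synth_ice40 -blif blinky.blif" blinky.v

# Place and route for an iCE40-HX8K, using the board's pin constraints
arachne-pnr -d 8k -p blinky.pcf blinky.blif -o blinky.asc

# Pack the ASCII bitstream into a binary bitstream
icepack blinky.asc blinky.bin

# Upload the bitstream to the board over USB
iceprog blinky.bin
```

That's four commands from Verilog to a running design, which is about as close to the "sudo apt-get install a FOSS compiler" workflow as FPGAs get right now.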
There's a pretty interesting debate in the AI space right now on whether FPGAs or ASICs are the way to go for hardware-accelerated AI in production. To summarize, it's more about how to operationalize AI: how to use already-trained models with millions of parameters to get real-time predictions, as in video analysis or complex time-series models based on deep neural networks. Training those AI models still seems to favor GPUs for now.

Google seems to be betting big on ASICs with their TPU. On the other hand, Microsoft and Amazon seem to favor FPGAs. In fact, Microsoft has recently partnered with Xilinx to add FPGA co-processors on half of their servers (they were previously only using Intel's Altera parts). The FPGA is the more flexible piece of hardware, but it is less efficient than an ASIC, and FPGAs have been notoriously hard to program against (though things are improving). There's also a nice article out there summarizing the classical FPGA conundrum: they're great for designing and prototyping, but as soon as your architecture stabilizes and you're looking to ramp up production, taking the time to do an ASIC will more often be the better investment.

So the question (for me) is where AI inference will land in that regard. I'm sure Google's projects are large-scale enough that an ASIC makes sense, but not everyone is Google. And there is so much research being done in the AI space right now, and everyone's putting out so many promising new ideas, that being more flexible might carry an advantage. Google has already put out three versions of their TPUs in the space of two years.

Which brings me back to Xilinx. They have a promising platform for AI acceleration, both in the datacenter and in embedded devices, which was launched two months ago. If it catches on, it's gonna give them a nice boost for the next couple of years. If it doesn't, they still have traditional Industrial, Aerospace & Defense workloads to fall back on...
Another wrinkle is that their SoCs are being used in crypto mining rigs like the Antminer, so you never know how that demand is gonna go. Even as the value of BTC continues to sink, there is constant demand for more efficient mining hardware, and I do think cryptocurrencies are here to stay. While NVDA has fallen off a cliff recently due to excess GPU inventory, XLNX has kept steady.

XLNX TTM P/E is 28.98. The Semiconductors - Programmable Logic industry's TTM P/E is 26.48.

Thoughts?
Background: My degree is in computer science. I've worked various jobs in IT-related fields. I have programmed in a smattering of languages (C/Java/etc.) and have mostly worked at the application layer and up. In school I did take several computer hardware classes, and in one we built our own 16-bit MIPS CPU (with pipelining) in a circuit simulator that one of the professors had written himself in Java. We created our own DRAM cells, NAND/AND/XOR/etc. gates, and we created our own shifters and multiplexers, stuff like that. But again... in a Java GUI, not anything practical like Verilog.

End goal: I like performance. I like to tinker and make things go faster. I missed the bitcoin (and altcoin) FPGA rush by about 2-3 years. Currently, GPU mining is dominated by so-called 'ASIC-resistant' algorithms, which mostly resist ASICs by requiring a large amount of RAM (800MB-2GB, depending on the algo) as part of the algorithm. Partly as a novelty for my personal use, and partly so that I have at least some level of knowledge of yet another IT field I can go into should I ever get laid off or sick of my current job, I want to learn how to program an FPGA for the purpose of implementing at least one of these ASIC-resistant algorithms. The energy savings over a GPU would be nice as well, as from what I can see most FPGAs are at least an order of magnitude, if not two, more efficient than a GPU. So, on to my questions:
Xilinx seems to be the number one 'newbie' FPGA maker (I say this because I see them mentioned more frequently than any other maker in threads by newbies). Apparently I can download their software kit and use their simulator to design and test in software; I don't need the actual FPGA circuit/card until I am ready to put my design 'into production'. Am I correct in thinking this? Are there other manufacturers I should target instead?
Given my school experience designing circuits at something like a UML or CAD level, are there any recommended paths I should look into, or is learning VHDL/Verilog a pretty hard requirement to implementing any design? I'm asking because I'm ignorant.
Maybe best asked for later, but would the Xilinx simulator give me some idea of which FPGA I'd want to buy? Will it give me anything approaching an 'expected behavior' ('hey, it runs at 1h/s on your CPU, but on our Model XYZ it will run around 300-400 h/s!') Or will there be anything useful in helping me pick FPGAs? Because...I already looked, and holyfuckballs there are a shitload of FPGA models and makers and ohgod how do I pick from this literal shitstorm of models?!
Are high memory FPGAs a thing? I saw some stuff on Intel's website that indicated they had a (very expensive) FPGA that also had DDR4 slots built in. That would solve the memory issue. Do I actually need that though? Would it be the case that even a cheap (~$100) FPGA would end up having enough cells to implement 1GB of memory? I don't really care if I end up having a massive grid of registers, apart from the fact that, well, logic not used for computation is wasted energy and silicon space. But maybe 1GB of memory ends up being super tiny on an FPGA? Again, I understand how to build 1GB of memory in circuits, but not on an FPGA or in silicon.
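As a rough sanity check on that last question, here is a back-of-the-envelope sketch. The on-chip memory figures below are illustrative assumptions for a ~$100-class FPGA, not the specs of any particular part:

```python
# Back-of-the-envelope: can a cheap FPGA hold 1 GB of state on-chip?
GB = 2 ** 30  # bytes

target_bits = 1 * GB * 8                # 1 GB expressed in bits

# Illustrative (assumed) on-chip memory for a $100-class part:
bram_bits = 300 * 18 * 1024             # ~300 block RAMs of 18 Kb each
lutram_bits = 20_000 * 64               # distributed LUT RAM, ~1.2 Mb

on_chip_bits = bram_bits + lutram_bits
shortfall = target_bits / on_chip_bits

print(f"need {target_bits / 1e9:.1f} Gb, have {on_chip_bits / 1e6:.1f} Mb "
      f"on-chip: ~{shortfall:,.0f}x short")
```

Even with generous assumptions, on-chip RAM comes up three orders of magnitude short of 1 GB, which is why the memory-hard algorithms point you at parts with external DDR interfaces rather than "a massive grid of registers."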
Cryptocurrency Market - The Biggest Trends to watch out for 2018-2025
A new market research study provides an analysis and evaluation of the current and prospective profitability, liquidity and financial stability of the Global Cryptocurrency Industry. Cryptocurrency is a digital currency that relies on cryptographic techniques to complete transactions securely. Being decentralized, with no central governing body involved in verifying transactions, secure protection and the production of new currencies are projected to be the major reasons for market growth over the forecast period. Moreover, the cryptocurrency community (miners/stakers, developers, service providers, users, etc.) drives the governance of cryptocurrencies. This positive feedback loop has made the community more homogeneous. Globally, cryptocurrency has been adopted as a digital payment method for the future financial world. These currencies are completely digital, requiring online transactions, unlike physical cash. Hassle-free transactions and a reduction in total ownership cost are a few key features propelling the industry. Major drivers such as authentication, ease of transaction, strong security, and faster international transactions are expected to spur market growth with steady performance. Moreover, the industry has not been confined by government rules, exchange rates, interest rates or international transaction fees, making the currency more convenient to use. The currencies can also be transferred digitally via devices such as smartphones, since they are completely unrestricted by any centralized bank or authority. Vendors and consumers prefer virtual money for making payments, thereby creating new opportunities for market growth.
Get the PDF with technological trends at https://www.xpodenceresearch.com/Request-Sample/105813

Tax-free & compliance-free transactions, lower chances of identity theft & fraud, and the negligible fees charged for cryptocurrency transactions are a few other key elements augmenting industry growth over the forecast period. On the other hand, lack of awareness among the public and stringent rules and regulations in various countries are expected to restrain market growth. The emerging industry is projected to grow over the forecast period with greater public awareness and a continuous influx of new market players with innovative products and services. The market has been segmented by type of currency, mining type, and application. The type of currency segment includes Bitcoin, Litecoin, Ethereum, Ripple, and others. Mining type is segregated into solo and pool mining. The application segment includes banking, real estate, the stock market and virtual currency. Other applications include the retail sector, gaming industry, education, logistics & transportation, BFSI, the tourism sector, and the media and entertainment industry. BFSI is expected to acquire the major share, followed by the retail sector. Cryptocurrency mining hardware includes Central Processing Unit (CPU) mining, Graphics Processing Unit (GPU) mining, Field-Programmable Gate Array (FPGA) mining, and Application-Specific Integrated Circuit (ASIC) mining. ASIC mining can calculate 10,000 times faster than conventional CPU mining. Increasing acceptance and the potential growth of this industry have attracted various small vendors globally to compete in the market. Apart from Bitcoin, Litecoin has also gained prominence over the last few years, and there are plenty of other currencies in the market, namely Namecoin, Novacoin, Peercoin, Ripple, Stellar, Primecoin, Megacoin, and many others.
Geographically, the market spans North America (U.S., Mexico, and Canada), Europe (UK, France, Germany, and the rest of Europe), Asia-Pacific (China, Japan, India, Australia, and the rest of Asia-Pacific), and MEA (Middle East, Latin America, and Africa). The North America region dominates the market owing to the regulations offered by its governments. Brazil and Canada are other major regions using cryptocurrency due to favorable rules and regulations.

Obtain report details with technological advancements at https://www.xpodenceresearch.com/Reports/Cryptocurrency-Market

Key market players include Intel Corporation, Microsoft Corporation, Xilinx, Inc., NVIDIA Corporation, 21 Inc., AlphaPoint Corporation, Amazon.com, Inc., Advanced Micro Devices, Inc., BTL Group Ltd. (Blockchain Tech), BitGo, BitFury Group, Coinbase UK, Ltd., Coinsecure, Unocoin, Coinbase, Bitstamp Ltd., Zebpay, Poloniex Inc., Global Area Holding Inc., Digital Limited, and IBM Corp., among other niche players.
I'm slowly learning about bitcoin and litecoin. I've thought a little lately about litecoin mining hardware. This is my analysis. I've only had tangential exposure to hardware design, so my estimates or assumptions might be off. Feedback welcome!

The scrypt litecoin hash function is dominated by an operation called a salsa: it runs 2048 salsas for each hash, and each salsa involves reading/writing a 128B block from a 128KB scratch buffer. The requirement to have a 128KB buffer for each running hash is what makes scrypt difficult to accelerate. The 128B blocks are written successively in the first phase of 1024 salsas (the output of each salsa), and then read randomly in the second phase of 1024 salsas.

I thought about implementing the salsa on a Xilinx FPGA. I implemented a few salsa building blocks to get an idea of timing. The Xilinx chips have 2KB distributed blocks of RAM, but there isn't nearly enough on-chip memory to support many concurrent hashes. One idea is to store every 64th salsa output in the first phase, and then recompute intermediate salsas as needed. This means you need to do an expected 32 extra salsas for each salsa in the second phase. Based on my experiments, it seemed like a 32-clock (latency) salsa running at 200+MHz is possible (or better, but this seems like the right order of magnitude) on an Artix-7, which costs about $300. The Artix-7 has 730 2KB buffers. Thus, I estimate:
730 (number of concurrent hashes) * (200M (clock frequency) / (1024 (salsas per phase) * (1 + 33) (expected computed salsas per salsa) * 32 (clock cycles per computed salsa))) = 130.6KH/s
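That estimate works out as follows (a quick script; every figure is the post's own assumption, and the result lands at ~131 KH/s, matching the ~130.6 KH/s above up to rounding):

```python
# Reproduce the FPGA scrypt hash-rate estimate from the post's assumptions.
concurrent_hashes = 730        # one in-flight hash per 2 KB buffer
clock_hz = 200e6               # assumed achievable salsa clock
salsas_per_phase = 1024
computed_per_salsa = 1 + 33    # the post's figure: real salsa + recomputes
cycles_per_salsa = 32          # assumed salsa latency in clock cycles

cycles_per_hash = salsas_per_phase * computed_per_salsa * cycles_per_salsa
hashrate = concurrent_hashes * clock_hz / cycles_per_hash

print(f"{hashrate / 1e3:.1f} KH/s, {hashrate / 1e3 / 300:.2f} KH/$ at $300")
```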
This gives 0.44KH/$. A 7970 card gets 1.75KH/$. We're off by a factor of 4 in price/performance. This design might work in an ASIC. In a custom design, you can tune the trade-off between memory and computation, and probably improve the speed estimates above. I'm still trying to estimate the cost of such an ASIC design, but I'm a little out of my depth. I started to wonder why a 7970 gets such awesome price/performance. The other option is to put the 128KB blocks in DRAM. You don't need that much memory: 1GB gives you space for 8K concurrent hashes. But now you need high bandwidth to feed the salsa units. Each salsa reads 128B. A 7970 has a 260GB/s GDDR5 memory interface. That's 260GB/s / (2048 * 128B) = ~992KH/s as an upper bound.
Actual reported rates are around 700KH/s. I think that is because of the random access patterns in the second phase of salsas. That's about 1.75KH/$. So the other option would be an ASIC with the salsa units and a GDDR5 memory interface like a 7970 board. I estimate (from octopart.com) that the cost of the 3GB of DRAM on a 7970 card is about $60. Let's say the ASIC is $20 (about the cost of the bitcoin ASICs, but this might be wildly inaccurate for a chip and package that can support a 384-bit GDDR5 memory interface). Then we get 8.75KH/$, or about 5x the GPUs. Unfortunately, GDDR5 is a bleeding-edge memory standard. An FPGA couldn't possibly manage that level of performance at this point. Designing a GDDR5 board and memory controller would probably be extremely difficult. You could ask: what is the fastest DRAM interface supported by an FPGA? The Spartan-6 (approx $90 and up) can support a 64-bit DDR2 PC-800 interface. That's 1.6GB/s, so
1.6GB/s (bandwidth) / (2048 * 128) = 6KH/s.
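The same bandwidth arithmetic for both memory interfaces, as a quick script. All bandwidth and price figures come from the analysis above; the ~$90 Spartan-6 price is folded into the cost to reproduce the 0.06 KH/$ figure:

```python
# Bandwidth-limited scrypt rate: each hash streams 2048 salsas * 128 B.
BYTES_PER_HASH = 2048 * 128        # 256 KB of memory traffic per hash

def bw_limited_khs(bytes_per_second: float) -> float:
    # Upper bound on hash rate if memory bandwidth is the only limit.
    return bytes_per_second / BYTES_PER_HASH / 1e3

gddr5 = bw_limited_khs(260e9)      # 7970's 260 GB/s GDDR5 -> ~992 KH/s
ddr2 = bw_limited_khs(1.6e9)       # Spartan-6 64-bit DDR2-800 -> ~6.1 KH/s

# The ~0.06 KH/$ figure works out once the ~$90 FPGA joins the $14 DIMM.
print(f"GDDR5 ceiling: {gddr5:.0f} KH/s; DDR2 ceiling: {ddr2:.1f} KH/s "
      f"(~{ddr2 / (90 + 14):.2f} KH/$)")
```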
A DDR2 PC-800 DIMM is about $14; with a ~$90 Spartan-6 on top, that's a pathetic 0.06KH/$. You can manage DDR2-1066 or -1333 in a faster part, but that doesn't help price/performance. tl;dr: Trading memory for recompute puts FPGAs about 4x behind GPUs in price/performance, by rough estimate. The same idea as an ASIC is worth a closer look. GPUs are surprisingly efficient for scrypt! ASIC + GDDR5 memory is competitive, but the design is out of reach for mere mortals. edit: formatting.
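Both the recompute and bandwidth estimates above hinge on scrypt's two-phase access pattern. Here is a minimal Python model of that ROMix loop; note that the real mixing function is Salsa20/8 over the 128-byte block, and SHA-256 is used below only as a stand-in to keep the sketch short:

```python
import hashlib

N = 1024          # salsas per phase (scrypt's N parameter for litecoin)
BLOCK = 128       # bytes per block; N * BLOCK = the 128 KB scratch buffer

def mix(block: bytes) -> bytes:
    # Stand-in for the real Salsa20/8 core: any fixed-size pseudorandom
    # function exhibits the same memory access pattern.
    out = b""
    counter = 0
    while len(out) < BLOCK:
        out += hashlib.sha256(block + counter.to_bytes(2, "big")).digest()
        counter += 1
    return out[:BLOCK]

def romix(x: bytes) -> bytes:
    # Phase 1: 1024 salsas; each output is written sequentially into V.
    V = []
    for _ in range(N):
        V.append(x)
        x = mix(x)
    # Phase 2: 1024 salsas; each reads a data-dependent "random" block.
    for _ in range(N):
        j = int.from_bytes(x[:8], "little") % N
        x = mix(bytes(a ^ b for a, b in zip(x, V[j])))
    return x

digest = romix(b"\x00" * BLOCK)
```

The phase-2 index `j` depends on the running state, which is exactly why you can't prefetch and why the 7970 falls short of its theoretical bandwidth ceiling.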
The X6500 is designed to be a flexible, extendable device. It is sold bare, without an enclosure or accessories. We believe that designing the best mining device that suits everyone is impossible, so we want you to decide exactly how to set it up.

FPGA and ASIC hardware have many things in common. Both of them process very similar logic-function-based operations and deliver substantial bitcoin mining power very efficiently. ASIC hardware proved much more powerful than the FPGA miners; in consequence, crypto enthusiasts and companies moved towards the latest ASICs.

Bitmain produces Bitcoin mining machines, along with the largest mines and hash-rate trading platform products and services for the virtual currency mining industry. Low-power mining machines: Zynq SoCs act as the master control system for Bitmain's mining machines, offering high performance at a low BOM cost.

A miner that makes use of a compatible FPGA board works either in a mining pool or solo. This is the first open source FPGA Bitcoin miner. It was released on May 20, 2011.
Cracked the 10GH/s barrier mining 0xBitcoin on the Xilinx VCU1525 FPGA. I used an Antminer fan and created a DIY cardboard funnel to help with the additional cooling. I'll be adding a rear heatsink ...

Finally, a real FPGA mining rig that is easy to use: the TPS-1530, using the VU9P FPGA chip, has finally been released for public sale by the Frontier Group! Let's review this FPGA miner!

Unboxing the Xilinx VCU1525 FPGA. This massive FPGA is going to be used for cryptocurrency mining. I will be posting more on this card and getting it mining.

12A, 1.2V power supply for each array FPGA. Input is 12V nominal from a jack, a disk drive connector (Molex), a PCIe graphics power connector (6-way) and a Phoenix connector. Array controller.