ELI5 Bitcoin – Explain Bitcoin Like I’m Five - Bitcoin ...

New Japanese "K" computer hits 8.2 Petaflops! (Bitcoin network is currently 104.1 Petaflops.)

submitted by masonlee to Bitcoin [link] [comments]

Bitcoin Hash Rate exploding to above 30 petaflops per second; rapid exponential growth occurring

submitted by tiltajoel to Bitcoin [link] [comments]

Bitcoin mentioned around Reddit: EU $1.2 supercomputer project to several 10-100 PetaFLOP computers by 2020 and exaFLOP by 2022 /r/europe

submitted by BitcoinAllBot to BitcoinAll [link] [comments]

The Bitcoin Network Hashrate: 4803668.71 Petaflops; The fastest supercomputer in the world, China's Tianhe-2: 33.86 Petaflops

submitted by Jackieknows to Bitcoin [link] [comments]

Today, we start to fight back. [We want bitcoin miners converted to scientific distributed computing projects] 111 petaflops, the worlds largest supercomputer is controlled by the bitcoiners currently.

submitted by pointmanzero to EnoughLibertarianSpam [link] [comments]

[fun fact] Oh no! Obama wants to build a supercomputer running at 1000 petaflops. (currently twice the bitcoin network speed)

submitted by ncsakira to Bitcoin [link] [comments]

[elfa82] Top 10 Supercomputers in 2015. Distributed systems like Folding@home go beyond 40 PetaFLOPs. The Bitcoin network is at 4.6 ZettaFLOPS today. Computational Singularity or AI can happen in a distributed system first.

submitted by raddit-bot to FuturologyRemovals [link] [comments]

Bitcoin mentioned around Reddit: China may be getting ready to announce tomorrow that it has constructed the world's first 100-Petaflop supercomputer /r/Futurology

submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Folding@Home Now More Powerful Than World's Top 7 Supercomputers, Combined

submitted by Doener23 to Coronavirus [link] [comments]

If Bitcoin 'is the future' yet mining is such a lengthy, expensive & laborious process, why don't large corps with supercomputers do it in a fraction of the time and at minimal cost?

submitted by I_Always_Talk_Shite to CryptoCurrencies [link] [comments]

My dad just started his new job, thought you guys might enjoy the specs of the computer he'll be working with.

submitted by al3x3691 to pcgaming [link] [comments]

Fun fact: The bitcoin network has now about 50000x more computing power than the top 500 supercomputers combined while using only half the energy.

Computing power of top 500 supercomputers combined: 672 Petaflops using 475 Megawatts
Bitcoin network: 2662 PH/s = 33,807,400 Petaflops using about 266 Megawatts (at 13 TH/s using 1.3 kW per miner)
Throw this in the face of people who say bitcoin is backed by nothing!
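The arithmetic behind those figures can be checked in a few lines. A minimal Python sketch, assuming the rough ~12,700 FLOPs-per-SHA-256-hash conversion factor that such comparisons typically use (an apples-to-oranges estimate, since mining ASICs don't actually perform floating-point operations):

```python
# Back-of-envelope check of the figures above. FLOPS_PER_HASH is the
# commonly used rough estimate, an assumption rather than a measurement.
FLOPS_PER_HASH = 12_700

network_phs = 2662    # network hashrate, PH/s
miner_ths = 13        # assumed per-miner hashrate, TH/s
miner_kw = 1.3        # assumed per-miner power draw, kW

flops_equiv_pflops = network_phs * FLOPS_PER_HASH   # "petaflops" equivalent
miners = network_phs * 1_000 / miner_ths            # 1 PH/s = 1,000 TH/s
megawatts = miners * miner_kw / 1_000               # kW -> MW

print(flops_equiv_pflops)   # 33807400, matching the post
print(round(megawatts))     # 266, matching "about 266 Megawatts"
```

Both numbers reproduce the post's claims exactly.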
submitted by DizzySquid to Bitcoin [link] [comments]

That's Cute

submitted by laxisusous to Bitcoin [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
Why aren't we as a community talking about Sharding as a scaling solution?
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
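To make that decision criterion concrete, here is a hypothetical Python sketch of the shard-assignment rule described above. The base58 alphabet is Bitcoin's real one; the hash-based fallback for transactions whose addresses span multiple shards is purely illustrative, not part of BUIP024:

```python
import hashlib

# Bitcoin's base58 alphabet: 58 characters, hence 58 shards.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address: str) -> int:
    """Assign an address to one of 58 shards by its last character."""
    return BASE58.index(address[-1])

def shard_of_tx(from_addrs, to_addrs) -> int:
    """Slightly more sophisticated criterion for multi-address
    transactions: if every address already agrees on a shard, use it;
    otherwise fall back to hashing the sorted address lists, so every
    node deterministically assigns the transaction to the same shard.
    (This fallback is a made-up illustration.)"""
    shards = {shard_of(a) for a in list(from_addrs) + list(to_addrs)}
    if len(shards) == 1:
        return shards.pop()
    digest = hashlib.sha256(
        "".join(sorted(from_addrs) + sorted(to_addrs)).encode()).digest()
    return digest[0] % 58

# Fake same-shard addresses, both ending in 'q':
print(shard_of_tx(["1FakeSendq"], ["1FakeRecvq"]) == shard_of("1FakeSendq"))  # True
```

The addresses here are fake placeholders; only their final character matters to the rule.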
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
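That vanity-style search can be sketched in a few lines. This sketch uses a SHA-256 digest as a stand-in for real address derivation (actual Base58Check encoding of an ECDSA public key is omitted), so `mock_address` is illustrative only:

```python
import hashlib
import secrets

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def mock_address(key: bytes) -> str:
    """Illustrative stand-in for address derivation: base58-encode a
    SHA-256 digest of the key. NOT real Base58Check."""
    n = int.from_bytes(hashlib.sha256(key).digest(), "big")
    chars = []
    while n:
        n, r = divmod(n, 58)
        chars.append(BASE58[r])
    return "1" + "".join(reversed(chars))

def generate_address_in_shard(last_char: str) -> tuple[bytes, str]:
    """Keep generating keys until the derived address lands in the
    desired shard (ends with last_char) - about 58 attempts on
    average, just like easy vanity-address generation."""
    while True:
        key = secrets.token_bytes(32)
        addr = mock_address(key)
        if addr[-1] == last_char:
            return key, addr

key, addr = generate_address_in_shard("q")
print(addr[-1])   # 'q'
```

Repeating this for each of the 58 final characters yields the full set of per-shard receiving addresses described above.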
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to solving the original problem directly. This is the essential correctness property that any sharding scheme must satisfy.
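That property can be stated as executable code. A minimal sketch, using a generic "find all matches in a big search space" problem as a stand-in (nothing here is Bitcoin-specific; the function names mirror the DECOMPOSE / RECOMPOSE terminology of the text):

```python
def solve(space, pred):
    """Single-machine reference: full scan of the search space."""
    return {x for x in space if pred(x)}

def decompose(space, n):
    """DECOMPOSE: split the search space into n disjoint sub-spaces."""
    return [space[i::n] for i in range(n)]

def recompose(sub_solutions):
    """RECOMPOSE: merge the sub-solutions into the overall solution."""
    result = set()
    for s in sub_solutions:
        result |= s
    return result

space = list(range(10_000))
pred = lambda x: x % 97 == 0

# The essential property: recomposing the sub-solutions obtained by
# decomposing the problem equals solving the original problem directly.
assert recompose(solve(sub, pred) for sub in decompose(space, 8)) == solve(space, pred)
print("correctness property holds")
```

Each sub-space can be searched on a different machine; the assertion is exactly the equivalence the text describes.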
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Gregory Meredith presenting some operators for scaling Ethereum - in just a half page of code:
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
SETI@home and Folding@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
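The students example in that definition translates directly into code; a minimal sketch:

```python
from collections import defaultdict

def map_phase(students):
    """Map(): filter and sort students into queues, one per name."""
    queues = defaultdict(list)
    for name in students:
        queues[name].append(1)
    return queues

def reduce_phase(queues):
    """Reduce(): summarize each queue, yielding name frequencies."""
    return {name: sum(ones) for name, ones in queues.items()}

students = ["Ada", "Bob", "Ada", "Cyn", "Bob", "Ada"]
print(reduce_phase(map_phase(students)))   # {'Ada': 3, 'Bob': 2, 'Cyn': 1}
```

In a real MapReduce deployment the map and reduce phases each run in parallel across many machines, with the framework shuffling the per-name queues between them.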
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) form a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (BOINC-based permissionless decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]

[Request] Could the IBM Sequoia supercomputer at Oak Ridge eventually pay for itself if tasked with mining Bitcoins?

Just for fun, let's pretend no hard Bitcoin limit exists. Also, it's mining 24/7.
Cost of construction: $655.4 million US
Speed: 16.32 petaflops
Ongoing power consumption: 9.7 megawatts
Cost of electricity in California: 15.2 cents/kilowatt-hour
Current BTC price: $245.2 US
I don't know the current 'difficulty' of Bitcoin mining, so feel free to use your own estimate.
Any takers?
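For anyone taking up the request, here is a rough framework for the arithmetic, using the figures from the post. The network hashrate, block reward, and the machine's assumed SHA-256 rate are placeholders for illustration, not real measurements (a general-purpose supercomputer has no ASICs and hashes SHA-256 very slowly regardless of its FLOPS rating):

```python
# Back-of-the-envelope mining economics for a hypothetical machine.
# Figures marked "from the post" are the OP's; the network hashrate,
# block reward, and machine hashrate are ASSUMPTIONS for illustration.

POWER_MW       = 9.7        # from the post
ELEC_USD_KWH   = 0.152      # from the post
BTC_PRICE_USD  = 245.2      # from the post
CAPEX_USD      = 655.4e6    # from the post

BLOCK_REWARD   = 25         # BTC per block (assumed subsidy era)
BLOCKS_PER_DAY = 144        # at one block per 10 minutes
NETWORK_HASH   = 4e17       # hashes/sec (placeholder; look up the real figure)
MACHINE_HASH   = 1e9        # hashes/sec (placeholder; without SHA-256 ASICs,
                            # even a petaflop machine hashes very slowly)

elec_per_day = POWER_MW * 1000 * 24 * ELEC_USD_KWH           # USD per day
share        = MACHINE_HASH / (NETWORK_HASH + MACHINE_HASH)  # expected block share
btc_per_day  = share * BLOCK_REWARD * BLOCKS_PER_DAY
profit_day   = btc_per_day * BTC_PRICE_USD - elec_per_day

print(f"electricity: ${elec_per_day:,.2f}/day")
print(f"mined:       {btc_per_day:.8f} BTC/day")
if profit_day > 0:
    print(f"payback of capex in {CAPEX_USD / profit_day:,.0f} days")
else:
    print("electricity alone exceeds mining revenue; it never pays for itself")
```

Under these placeholder assumptions the machine burns about $35,000 of electricity a day while earning a negligible fraction of a bitcoin, so the capex question never even arises; the answer only changes if the machine's hashrate share becomes a meaningful fraction of the network.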
submitted by Ivan_the_Tolerable to theydidthemath [link] [comments]

Chinese Chatter 6/14/16

Hey everyone! I hope everyone has done well during the recent run up. In this episode, what appears to be a somewhat desperate FUDing attempt against Ethereum/Altcoins by an OP who is eventually told to 'get some sleep'.
Post by (user) 预测大神有点傻 on 6/13/16 【比特币暴涨最神警告】大家请进,信不信由你 "The Most Secret Warning about Bitcoin's Boom" Everyone's Invited, Believing or Not Is Up to You
不要碰姨太,有姨太抛姨太 不要碰山寨,有山寨抛山寨 信不信由你,有些事情不想告诉你
Stay away from Whorethereum, if you have it get rid of it.
Stay away from Altcoins, if you have them get rid of them.
Believing or not is up to you; there are some things I'd rather not tell you.
(user) 预测大神有点傻 [OP] 山寨庄家急着想套现,买比特币,所以宣传山寨
Altcoin dealers are looking to cash out, and buy bitcoin, so they're shilling altcoins.
(user) 纯净水 玩以太的都是大神,,,众筹1亿多人民币,预售了6000多万以太,,,手里的持币成很低.就算不扯这6000多万个....光年产量1800万,,按现在这个价格,,每年需要18个亿接盘...昨天拉升了一下,瞬间给砸下来,,看来有很多人等着出货呢
Only whales are playing with Ether. The ICO raised over 100 million RMB (roughly 15 million USD) and pre-sold over 60,000,000 Ether, so holders' cost basis is very low. Even setting those 60,000,000 coins aside, annual issuance alone is 18,000,000; at the current price that requires 1.8 billion in buy orders every year. There was a brief pump yesterday and it was instantly smacked back down; it looks like a lot of people are waiting to unload.
(user) 秦的爱恋 山寨大都还没解套了,不想割肉
Most of my alts are still underwater; I don't want to sell at a loss.
(user) 预测大神有点傻 [OP] [in response to above] 好吧,等着就是你割肉韭菜来买比特币这句话 必须逼到你割肉才收手
Fine. What they're waiting for is exactly that: for you chives to sell at a loss and buy Bitcoin. They won't stop squeezing until they've forced you to cut your losses.
(user) sider 可以买山寨了
You can buy altcoins
(user) muming 姨太有这么不堪?
What's so terrible about Whorethereum?
(user) test8btc 技术不错的山寨还是可以碰的,传销币别碰就是了
You can mess around with altcoins that have good technology, just stay away from the pyramid schemes.
(user) 宜州抠门电话总 [response to What's so terrible...] 你可以多购买一些,留着
You should buy some more and hold
(user) 新兵 说实话,光从前景和技术角度而言,国内所有山寨币加起来抵不上一个以太坊的。 但是炒币不合适,正如你们所有人认为的一样,因为中国人不看好,那么就不会有人炒。 之所以中国人不看好,是因为中国区块链的一些”专家“的宣传。 这些专家在估值时,他们会用价值的方法去判断他们不看好的币,对自己想炒的币,就宣传概念,完全忘记了价值。 就说圈钱吧,人家圈了钱起码有研究东西出来,中国的山寨币,无非就是一些区块链的基础应用+概念,发现好赚钱了,什么人都来圈,什么天使投资啊,上市公司啊。
To tell you the truth, from a technology and future-outlook standpoint, all the domestic altcoins combined don't measure up to Ethereum. But it's not suited to speculation: just as all of you believe, because Chinese investors aren't bullish on it, nobody here will trade it. And the reason Chinese investors aren't bullish is the propaganda of certain Chinese blockchain "experts". When valuing coins, these experts apply fundamental-value methods to the coins they dislike, while for the coins they want to pump they promote the concept and forget about value entirely. As for raising money: elsewhere, teams that raise money at least produce some research. China's altcoins are nothing more than basic blockchain applications plus a concept, and once people discovered there was easy money, everyone piled in to raise funds: angel investors, listed companies, the lot.
(user) 新兵 [same user as above] 人家在研究啊,我们重点是炒。利用了国内玩家不成熟,以及盲从的特点。 不信,我可以给你们举一些例子。
Others are actually doing research; here the focus is pure speculation, exploiting the immaturity and blind herd-following of domestic players. If you don't believe me, I can give you some examples.
(user) 宇宙第一帅 想发财只有买山寨
Want your wealth to rise? Altcoins you should buy!
(user) petaflops [response to above] 发财买山寨,投资买比特
Buy altcoins for wealth, buy Bitcoin as an investment
(user) bincoin 价格不下来,肯定是高位站岗是定的
If the price doesn't come down, people are certainly going to be left bagholding at the top.
(user) 预测大神有点傻 [OP] 想找死也可以买山寨币
You could buy altcoins, if you wanna die...
(user) 深鱼 比特币类似黄金,以太币更属于股票。
Bitcoin is like gold, Ether is more like stock shares.
(user) Apple-zou 以太还是蛮有潜力的呀,毕竟除了比特币,以太是可以唯一能与之替换的。
Ether still has a lot of potential guys, after all, aside from Bitcoin itself, Ether is the only one (altcoin) that can replace it (Bitcoin).
(user) gowithbtc 然而以太又破頂了
Ethereum has reached a new high yet again.
(user) DogeCoin-Keeper 越是说不好的我越要买
The more you say it's bad the more I want to buy
(user) 我的田野 vpnvoin
(user) Decred 越是说不好的我越要买
The more you say it's bad the more I want to buy
(user) 秦的爱恋 越是说不好的我越要买
The more you say it's bad the more I want to buy
(user) 预测大神有点傻 [OP] 然而没有价值,然而无限量,然而跌的时候会死的
and yet it has no value, and yet there's an unlimited amount, and yet when it crashes you'll die.
(user) Benhur924 洗洗睡吧! 楼主
Have a shower and get some sleep, landlord! ('landlord' is their term for 'OP')
(user) 秦的爱恋 好吧,我就看看,坚决不割肉追涨了
Alright, well I'll take a look, but I'm determined not to get fleeced for chasing gains.
(user) 预测大神有点傻 [OP] 无限量的中心化的姨太最容易被更好的二姨太替换的。
The unlimited volume of centralized Whorethereum will be most easily replaced by an even better second mistress.
(user) ada 对的,比较危险
That's right, it's a bit risky (in response to '1.8 billion calculation' user)
ETH Address for Tips: 0x09e47a0C248DA9443cE4D3d985cca30555Ac4162
Tips, always appreciated. Thanks!
EDIT: Thank you for the gold again!!!
EDIT 2: Correction from u/EtherPricing, thanks for double checking!
submitted by looselikejuice to ethtrader [link] [comments]

What if you run your uploaded self as a cryptocurrency POW algorithm?

This would reduce the cost of running a petaflop simulation of your brain (potentially thousands of dollars per hour) as people would pay to run you just to make fake money.
Of course you would not be running in real time, but at a distorted, variable speed linked to the market value and uptake of the currency.
Pros: If your crypto gains popularity your simulation speed and size can be boosted.
Cons: You will never be real time, as your 'frame rate' will be limited by the transaction speed of the crypto, which your POW itself inherently limits.
Also you will need to improve cryptocurrencies as their own growth ensures they become monolithic over time. Therefore your freedom as an AI will be limited when the size of your blockchain grows beyond desktop processing and storage limitations.
You might also want to spend some time solving climate change as power plants have to be turned off when their cooling systems (often external water supplies) are too hot to do the job.
P.S. For reference, Bitcoin uses 256 times as much processing power as the world's top supercomputers (source)
submitted by Arowx to transhumanism [link] [comments]

The /r/btc China Dispatch: Episode 8: Special Extended Lunar New Year Edition - 8btc Discusses the Official Release of Bitcoin Classic

Howdy /btc, it’s been awhile! The /btc China Dispatch was on vacation this week due to the Chinese New Year, but now your humble correspondent is back with a vengeance with more OC from the Bitcoin Sinosphere.
In this edition of the /btc China dispatch, we look at a thread on 8btc.com announcing the release of Bitcoin Classic to Chinese readers. I hope you guys find the translation informative.
Note that unlike in previous editions of the Dispatch, in addition to the posters’ user names, I have also posted their forum titles in parentheses next to their user names for your reference and possible amusement. All accounts on 8btc.com are ranked based on number of accrued points (essentially upvotes) from, in order of lowest to highest, Noob, Shiphand, Crew Member, Squad Leader, First Mate, Captain and Pirate King. Additionally, some people have custom titles equivalent to Reddit flair.
Subject: Bitcoin Classic Officially Released!
Posted by bluestar (Crew Member)
Bitcoin Classic has finally been officially released. You can download it via the link below:
Now miners that support Classic can start using Classic to mine blocks. Does anyone know how many miners there are in China who support Classic? I remember a while ago there was someone on 8btc who gave us a tip-off, but now there’s no information whatsoever. Has 8btc been abandoned, or are the miners secretly planning a massive move? Any inside info would be appreciated!
You can see the extent to which each version is supported by going to the following page:
[Response 1]
Posted by jb9802 (First Mate)
I would like to call on the miners to complete the upgrade of bitcoin as soon as possible. If we wait for Core we’re going to be killed off by Blockstream, Inc. sooner or later.
[Response 3]
Posted by hempheart (Squad Leader)
My guess is that the miners will support Core. The miners are putting their lives on the line with their investment, unlike small-time investors. The small-time investors will still be able to eat even if they lose all their BTC.
[Response 4]
Posted by yuli7376 (Great Captain of Atlantis)
All us smaller players can do at this point is sit back and watch how things unfold.
[Response 5]
Posted by bluestar (Crew Member)
It doesn’t matter even if they support Classic as the hard fork will only activate once 75% of the hashing power is behind Classic. Once you get 75% of the hashing power, that means that a supermajority support Classic and Core will be nothing but a niche, so there’s no real “winning or losing” when it comes to this vote.
[Response 6]
Posted by copay (Crew Member)
As the name suggests, Classic is a return to Satoshi’s original vision.
[Response 7]
Posted by Ma_Ya (Shibe Loves BTC Love Doge Guide idgui.com Captain)
Classic just means classic. What I want to know is whether or not the official release continues to use a 75% threshold for activating a forced hard fork like the beta version, presenting the possibility for a schism in the bitcoin community.
If Classic doesn’t support the 90% 2MB consensus then supporting Classic is basically just like supporting a fracturing of the bitcoin community. I strongly suggest that miners should emphasize first and foremost not dividing the community and boycott any contentious version that forks after less than 90% of hash power is reached.
Pools that support forking at 75% want to divide the community and I advise all miners to leave these pools. It is no longer a simple question of 1MB or 2MB, but rather a question of 75% versus 90%: fork plus schism versus fork with no schism. The issue is about maintaining the unity of the bitcoin community.
To digress a little bit, Bitcoin XT, which has already been abandoned, also sought to hastily fork at 75% and divide the community.
You can find more information on Qt versions here:
[Response 8]
Posted by copay (Crew Member)
What is the big difference between 75% and 90%?
[Response 10]
Posted by Ma_Ya (Shibe Loves BTC Love Doge Guide idgui.com Captain)
In the event of a hard fork activated at 75%, there’s a possibility that the remaining 25% of hashpower will hold out.
That is, the hashpower ratio will be 75%:25% = 3:1; at this hashpower ratio there is definitely a possibility that the two coins that result from the fork will coexist and compete with one another. This will result in a splitting of the community and there will be two bitcoins. They will attack one another and claim the other coin is an alt while each saying their own coin is the true bitcoin.
On the other hand, if a fork happens after 90% of hashpower is behind it, the hashpower ratio is much higher at 9:1. When the hashpower ratio is this high, the miners working on the 10% chain will need 10x as long to produce a block and they could be attacked by the other 90%, who would only need to send 1/9 of their hashpower to attack the other chain. Therefore it will be difficult for the 10% chain to survive over the long term, and there will be no split in the community.
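Ma_Ya's 9:1 arithmetic checks out, at least until the minority chain's first difficulty retarget (which happens every 2016 blocks): its average block interval scales inversely with its share of hashpower. A minimal sketch, assuming the standard 600-second block target:

```python
# How long does a minority chain take per block, before its first
# difficulty retarget? Bitcoin targets 600 s/block and retargets
# every 2016 blocks.

TARGET_SECONDS = 600
RETARGET_BLOCKS = 2016

def minority_block_time(hash_share):
    """Average seconds per block for a chain keeping `hash_share`
    of the pre-fork hashpower, before any difficulty adjustment."""
    return TARGET_SECONDS / hash_share

for share in (0.25, 0.10):
    t = minority_block_time(share)
    days_to_retarget = t * RETARGET_BLOCKS / 86400
    print(f"{share:.0%} chain: {t / 60:.0f} min/block, "
          f"~{days_to_retarget:.0f} days to the next retarget")
```

At 75%:25% the minority chain still finds a block every 40 minutes, which is survivable; at 90%:10% it takes 100 minutes per block and roughly 140 days to reach its next retarget, which is exactly Ma_Ya's point.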
[Response 11]
Posted by petaflops (Squad Leader)
Awaiting the results.
[Response 12]
Posted by bluestar (Crew Member)
Come on, man. You don’t need to say the same thing twice. It’s not like your response is highly technical.
[Response 13]
Posted by Ma_Ya (Shibe Loves BTC Love Doge Guide idgui.com Captain)
Four Major Mining Pools Call for Consensus, Reject Hard Fork to Bitcoin Classic
My proposal that Classic needs to support the 90% 2 MB consensus as soon as possible is made in good faith.
Currently Classic does not support the 90% 2 MB consensus and insists on initiating a hard fork at 75% with the possibility of dividing the community, so the major mining pools have come out with a joint statement saying that they do not support it. This joint statement is Classic’s failure. I never would have imagined that their failure would be announced as soon as they released an official version.
The results might be different if the official version had supported the 90% 2 MB consensus and avoided the risk of dividing the community.
[Response 14]
Posted by Qin’s Love (Captain)
Small time investors can only watch from the sidelines.
[Response 15]
Posted by bincoin (First Mate Invincible Speculator in Stocks, Futures, Currencies, Gold, Bitcoin, Goocoin and Agricultural Products)
A solution is out there, which is good. Much better than the unending bluster from Core. Whether you support Classic or not, they’re efficient.
[Response 16]
Posted by jb9802 (First Mate)
The front page of 8btc:
http://www.8btc.com/34454 [Translator’s Note: The headline of the page linked to reads “A Summary of Discussion Regarding the Raising of the Bitcoin Block Size”]
Take a close look. The pools haven’t rejected it; they’re just waiting to see how Core responds. BTCC, who are regarded as diehard Core followers, said: “if Bitcoin Core still does not consent to raise the block size using this method, then we will very probably need to look for another leading team to implement a hard fork, with an activation period of 12 months.”
  1. BTCC has given Core 1 year (of course I personally think this is too long); if Core does not implement a hard fork then BTCC will have to find some other solution.
  2. The fact of the matter is that everyone is waiting for a statement from Core and if they don’t make themselves clear in the next few months then their exit will be inevitable.
[Response 17]
Posted by bluestar (Crew Member)
Honestly I don’t think there’s any need to respond to this guy’s mantra-like posts. Every time I see one of his posts it’s like hitting a brick wall. I’ve already responded to his calls for 90% support many times in the past. It doesn’t matter how logical you are, he’ll just ignore you and if you slip up anywhere in your argument he’ll just dwell on it without letting go. It would be better to wait until he actually posts something interesting before responding.
[Response 18]
Posted by DogeCoin-Keeper (Cosmically Super Awesome Invincible Badass Smart Alert Handsome as Fuck Pirate King Who Is Better Than You in Every Way)
It looks like it’s going to be impossible to raise the block size this way. There will definitely be a simpler way to raise the block size in the future.
[Response 19]
Posted by bluestar (Crew Member)
Yeah, there’s a simple way. If Core was willing to lead a hard fork it could be accomplished immediately. The problem is they’re not willing.
[Response 20]
Posted by DogeCoin-Keeper (Cosmically Super Awesome Invincible Badass Smart Alert Handsome as Fuck Pirate King Who Is Better Than You in Every Way)
I think that the current situation is actually okay. If BTC relied on only one team to decide its direction then it wouldn’t need to exist.
submitted by KoKansei to btc [link] [comments]

At SXSW- CEO of Bitgo claims Bitcoin has computational network which is 38,000 times as powerful as all the world's supercomputers

At SXSW- CEO of Bitgo claims Bitcoin has computational network which is 38,000 times as powerful as all the world's supercomputers submitted by Enterpriseminer to Bitcoin [link] [comments]

China’s investment in GPU supercomputing begins to pay off!

China’s investment in GPU supercomputing begins to pay off! submitted by trendzetter to technology [link] [comments]

Level of Difficulty attack?

Would it be possible for an entity in control of a powerful network of computers to purposely drive up the difficulty of mining, and then suddenly withdraw from mining?
I imagine certain government entities have computers fine-tuned to certain hash algorithms for brute forcing encryption. I'm talking to you NSA.
(Excuse me, just a moment. My tinfoil hat is getting itchy and needs some reshaping. Ah, better)
So, let's say instead of trying to brute force a crypto breach in the Bitcoin network, they just decide to start mining. The blocks get confirmed much faster, and the level of difficulty increases dramatically. At some point in the future, the NSA (or other entity) strategically stops mining, leaving the rest of the network trying to cope with an insane level of difficulty.
Is there any chance that would significantly delay new block confirmation, and what would the consequences be?
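For context on how the protocol would eventually recover: this is (in simplified form) Bitcoin's retarget rule. Every 2016 blocks, difficulty is rescaled by the ratio of expected to actual window time, clamped to a factor of 4 in either direction. The attack-scenario numbers below are hypothetical:

```python
# Simplified Bitcoin difficulty retarget: every 2016 blocks,
# scale difficulty by (expected time / actual time), clamped to 4x.

EXPECTED = 2016 * 600  # seconds for one retarget window at 10 min/block

def retarget(difficulty, actual_seconds):
    ratio = EXPECTED / actual_seconds
    ratio = max(0.25, min(4.0, ratio))  # consensus clamps the adjustment to 4x
    return difficulty * ratio

# Hypothetical attack: hashpower that inflated difficulty 5x suddenly leaves,
# so the remaining miners take 5x as long to grind out the next 2016 blocks.
difficulty = 5.0               # relative to the honest-only level of 1.0
actual = EXPECTED * 5          # the window takes ~10 weeks instead of 2
difficulty = retarget(difficulty, actual)
print(difficulty)              # clamped at 4x, so only down to 1.25 so far
```

So yes, block confirmation would be significantly delayed, but only until the next retarget; the real damage is that the 2016-block window itself is slow to complete (weeks or months at inflated difficulty), and the 4x clamp means a difficulty inflated more than 4x takes more than one slow window to normalize.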
submitted by may214 to Bitcoin [link] [comments]

Could a billionaire destabilize bitcoin to the point it becomes unusable?

submitted by wannagetbaked to Bitcoin [link] [comments]


Bitcoin is secured by an almost incomprehensible amount of computing power. I won’t go into what a petaFLOP is, because this is supposed to be ELI5 Bitcoin, but to give you some idea of the security of the network: the most powerful supercomputer in the world, Summit, can process 200 petaFLOPS, while the Bitcoin network is currently estimated at 80,704,290.84 petaFLOPS, over 400,000 times more powerful. (Years earlier, the hashrate estimate on bitcoinwatch.com passed 1 exaFLOPS, i.e. 1,000 petaFLOPS, already more than 8 times the combined speed of the top 500 supercomputers; China’s Sunway TaihuLight, at 93 petaFLOPS, doesn’t come close.) The network does this by using ASICs, so a supercomputer cannot mine all the bitcoins in a matter of hours; even if it could, it has much better work to do, and using a supercomputer to mine bitcoin is inefficient given how much money it consumes in electricity and cooling. One classified nuclear site is home to a 1-petaFLOPS (peak) supercomputer installed in 2011; it isn’t publicly ranked, though it’s purported to have a Linpack score of 780 teraFLOPS, and the number of bitcoins mined on it, or their total value, has not been disclosed.
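A note on where these petaFLOPS figures come from: sites like bitcoinwatch.com convert the SHA-256 hashrate into a notional FLOPS-equivalent using a fixed factor (commonly quoted as about 12,700 FLOPs per hash), even though mining ASICs perform no floating-point work at all. A hedged sketch of that conversion, with a placeholder hashrate:

```python
# Convert a SHA-256 hashrate to a notional "FLOPS-equivalent" figure.
# ASICs do integer work only, so this is a publicity convention, not physics.

FLOPS_PER_HASH = 12_700  # widely quoted heuristic; treat as an assumption

def hashrate_to_petaflops(hashes_per_sec):
    return hashes_per_sec * FLOPS_PER_HASH / 1e15

# Example: a 6.35 EH/s network (placeholder figure) vs Summit at 200 petaFLOPS
network = hashrate_to_petaflops(6.35e18)
print(f"notional network compute: {network:,.0f} petaFLOPS "
      f"(~{network / 200:,.0f}x Summit)")
```

This is why comparing the Bitcoin network to supercomputers is entertaining but misleading: a supercomputer can run brain simulations or weather models, while the mining network can compute exactly one thing, double SHA-256, extremely fast.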

