
Bitcoin ABC's "parked blocks" feature allows minority hashrate attackers to cause a permanent chain split with high probability.

BACKGROUND
------------------------------------------------------
In November, Bitcoin ABC introduced "auto-finalization" and "parked blocks" functionality in order to mitigate the risk of 51% attacks.
Roughly, the way auto-finalization works is that after receiving a new block, a node will look back ten blocks prior and mark that previous block as finalized, which means that the node will not reorg past that point without manual intervention. This prevents attackers from double spending with large reorgs, and thus provides some protection for exchanges and the like.
The parked blocks functionality is intended to protect against medium-length reorgs by adding a proof-of-work penalty. Specifically, a 1 or 2 block reorg requires an extra 1/2 block's worth of PoW, a 3 block reorg requires an extra full block's worth of PoW, and a 4+ block reorg requires double the PoW. These are approximations, because BCH's DAA changes the difficulty each block, but you can find this implemented in FindMostWorkChain() in validation.cpp.
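To make that penalty schedule concrete, here is a minimal Python sketch of the rule as just described. It is only an approximation: the real logic in FindMostWorkChain() works on per-block chainwork that shifts with the DAA, and all names here are illustrative, not taken from the ABC source.

def chain_is_parked(reorg_depth, new_chain_work, old_chain_work, block_work):
    """Rough sketch of the parked-blocks penalty described above (not the
    actual Bitcoin ABC code). Work values are measured since the fork point,
    and block_work stands in for one block's worth of PoW at current difficulty."""
    if reorg_depth <= 0:
        required = old_chain_work                    # plain extension, no penalty
    elif reorg_depth <= 2:
        required = old_chain_work + block_work / 2   # extra 1/2 block of PoW
    elif reorg_depth == 3:
        required = old_chain_work + block_work       # extra full block of PoW
    else:
        required = 2 * old_chain_work                # 4+ block reorg: double the PoW
    return new_chain_work <= required                # parked unless it clears the bar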
When these changes were added, there was some discussion on /r/btc about how auto-finalization could lead to chain splits: an attacker could mine a 10-block secret chain and broadcast it at the exact moment the honest network broadcasts its 10th block from the fork point. However, this is a difficult attack to pull off, and the parked blocks functionality actually makes it such that the attacker would need to mine more like 20 blocks secretly (approximately, because of the DAA), which makes it nearly impossible.
However, I did not see significant discussion regarding the implications of the parked block functionality itself, and the negative way in which it interacts with auto-finalization. Here is my attempt to rectify that, by presenting an attack that could cause chain splits with moderate probability even for attackers with minority hash rate.

ATTACK:
----------------------------------------------------
1) Somehow force a soft-split with the parked chain.
2) Make sure that the soft split continues until both sides finalize a block on their side of the split (possibly via balancing hashpower on both sides of the soft split).
More specifically:
1) Mine 3 blocks before the honest network mines three blocks, and broadcast block 3 when you detect honest block 3 has been broadcast.
2) Mine such that the difficulty/work condition is fulfilled (see STEP 2 of the analysis below regarding lowering the difficulty): block1 + 1/2*priv2 + 1/2*pub2 > pub4 + priv4. If this condition isn't met, the attacker can simply try to win the race outright from this point and collect 3-4 block rewards.
3) Ensure that each side of the soft split mines block 4 before the other side mines block 5, moving into the double-PoW penalty phase. This may require withholding blocks temporarily if "too far ahead", such that there is a 3vs3 split. Since this could happen, it improves our probability of success compared to the calculations below.
4) Mine at the tip of whichever chain is behind, such that neither side reorgs before finalizing a block on their side of the split. That is, each side must mine 7 blocks w/o being reorg'd. (Must mine 1 before the other mines 4, or 2 before 8, etc.)
--The analysis for step 4 is complicated and thus omitted below, but this step is likely to succeed, and extremely likely to succeed if the network is split close to evenly or if the attacker has substantial hash power.

ANALYSIS
----------------------------------------------------------
Let x = attacker hash rate; y = main chain hash rate after soft split; z = alt/"attack" chain hash rate after soft split.
-----STEP 1-------------
--Start by assuming the attacker has just mined a block and keeps it secret; then they must win 2 blocks before the honest chain wins 3
--There are 6 possible ways to win: AA, AHA, HAA, AHHA, HHAA, HAHA (A = attacker block; H = honest block.)
Pr[success] = x^2 + 2*x^2*(1-x) + 3*x^2*(1-x)^2
x = 1/20: 1.4%
x = 1/10: 5.23%
x = 1/4: 26.17%
x = 1/3: 40.74%
x = 1/2: 68.75%
x = 3/5: 82.08%
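For anyone who wants to check these numbers, a quick illustrative Python sketch of the closed form above (not part of the original analysis):

def step1_success(x):
    # Attacker already holds 1 secret block and must mine 2 more before the
    # honest network mines 3: AA, AHA, HAA, AHHA, HHAA, HAHA.
    return x**2 + 2 * x**2 * (1 - x) + 3 * x**2 * (1 - x)**2

for x in (1/20, 1/10, 1/4, 1/3, 1/2, 3/5):
    print(f"x = {x:.3f}: {step1_success(x):.2%}")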
------STEP 2---------------
Ideally, the 4th block on each chain will carry enough work relative to the other side that seeing the other side's 4th block does not reorg either chain, in which case the soft split should persist. This occurs if the following conditions are met:
1) In order to prevent the 4th honest block from reorging our 3 private blocks, we need: privChainWork + 1/2*(privBlock1work + privBlock2work) > mainChainWork
2) In order to prevent the 4th attack block from reorging the 3 public blocks, we need: mainChainWork + 1/2*(pubBlock1work + pubBlock2work) > privChainWork
Note that pubBlock1work == privBlock1work in all cases. Some algebra:
Condition 1: priv1 + priv2 + priv3 + 1/2*priv1 + 1/2*priv2 > pub1 + pub2 + pub3 + pub4
Condition 2: pub1 + pub2 + pub3 + 1/2*pub1 + 1/2*pub2 > priv1 + priv2 + priv3 + priv4
Rearranging each condition:
priv1 + priv2 + priv3 > pub1 + pub2 + pub3 + pub4 - (1/2*priv1 + 1/2*priv2)
pub1 + pub2 + pub3 > priv1 + priv2 + priv3 + priv4 - (1/2*pub1 + 1/2*pub2)
Chaining the two inequalities (both must hold simultaneously):
pub1 + pub2 + pub3 > pub1 + pub2 + pub3 + pub4 - (1/2*priv1 + 1/2*priv2) + priv4 - (1/2*pub1 + 1/2*pub2)
0 > pub4 + priv4 - (1/2*priv1 + 1/2*priv2) - (1/2*pub1 + 1/2*pub2)
(1/2*priv1 + 1/2*priv2) + (1/2*pub1 + 1/2*pub2) > pub4 + priv4
Since pub1 == priv1 (call it block1):
block1 + 1/2*priv2 + 1/2*pub2 > pub4 + priv4
This is extremely likely if the difficulty is decreasing, which can happen on the private chain by mining blocks with future timestamps, and is likely on the main chain as well, since it will have less hashpower than before the fork point.
------STEP 3------------------------
--What is the probability that both chains mine block 4 before the other mines block 5?
--Assume both chains have 3 blocks already and the attacker has no secret blocks. If the attacker successfully mined block 4 already, our chances would be higher, so this is an underestimate.
--The chance of success depends on how much hash is on each side, which we don't know. Here we analyze two possibilities:
1) y = z (attacker's best case);
2) y = 3z (1/4 of the honest hash is on one side, 3/4 on the other)
--For case 1, assume the miner simply shuts off until one side wins a block, and then immediately mines the other side. They could also pick a side to start with, but we ignore that possibility. For case 2, we assume the attacker mines on the minority chain until a block is mined, and switches if the minority chain wins the first block.
Case 1:
Pr[success] = x + z = x + y
x = 1/20: 52.5%
x = 1/10: 55%
x = 1/4: 62.5%
x = 1/3: 66.67%
x = 1/2: 75%
x = 3/5: 80%
Case 2:
Pr[success] = (x+z)*(x+y) + y*(x+z) = x^2 + 2xy + 2yz + xz
x = 1/20: 42.41%
x = 1/10: 47.13%
x = 1/4: 60.16%
x = 1/3: 66.67%
x = 1/2: 78.13%
x = 3/5: 84%
-----END STEP 3---------------------
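Both case tables can be reproduced the same way (illustrative Python only; y and z are derived from x as stated above, with hash rates normalized so x + y + z = 1):

def case1(x):
    z = (1 - x) / 2                      # honest hash split evenly (y = z)
    return x + z                         # attacker idles, then joins whichever side is behind

def case2(x):
    z = (1 - x) / 4                      # minority side of the split
    y = 3 * z                            # majority side
    return (x + z) * (x + y) + y * (x + z)

for x in (1/20, 1/10, 1/4, 1/3, 1/2, 3/5):
    print(f"x = {x:.3f}: case 1 = {case1(x):.2%}, case 2 = {case2(x):.2%}")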


EXAMPLE: Attacker has x fraction of the hash rate, and if he successfully finishes step 1, we assume that the difficulty works out in step 2 about half the time (probably a significant underestimate). Assume that each step is independent (not true, and causes an underestimate of success probability). Assume that the soft split that results splits the hash power 3 to 1, as in case 2 above. What is the probability of getting to the final step of the attack? How many initial blocks can the attacker expect to throw out before succeeding, and how long should this take given their hash rate?
x = 1/20: 0.3%; 334 blocks thrown out, 6,680 blocks total, 46.4 days
x = 1/10: 1.23%; 82 blocks thrown out, 820 blocks total, 5.7 days
x = 1/4: 7.87%; 13 blocks thrown out, 52 blocks total, 8.7 hours
x = 1/3: 13.58%; 8 blocks thrown out, 24 blocks total, 4 hours
x = 1/2: 26.86%; 4 blocks thrown out, 8 blocks total, 1.3 hours
x = 3/5: 34.47%; 3 blocks thrown out, 5 blocks total, < 1 hour
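A sketch that reproduces the table above under the stated assumptions (step 2 succeeds half the time, steps treated as independent, a 3:1 split as in case 2, ~10-minute blocks); small differences from the figures above are just rounding:

def step1(x):
    return x**2 + 2 * x**2 * (1 - x) + 3 * x**2 * (1 - x)**2

def step3_case2(x):
    z = (1 - x) / 4
    y = 3 * z
    return (x + z) * (x + y) + y * (x + z)

for x in (1/20, 1/10, 1/4, 1/3, 1/2, 3/5):
    p = step1(x) * 0.5 * step3_case2(x)      # chance of reaching the final step
    attempts = 1 / p                         # expected initial blocks thrown out
    network_blocks = attempts / x            # total blocks mined over those attempts
    days = network_blocks * 10 / (60 * 24)   # at ~10 minutes per block
    print(f"x = {x:.3f}: {p:.2%}, ~{attempts:.0f} thrown out, "
          f"~{network_blocks:.0f} blocks total, {days:.1f} days")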

DISCUSSION
-------------------------------------------
There are tradeoffs between protecting against chain-split attacks and protecting against deep reorgs, and a chain like BCH's, with a minority of SHA-256 hashrate, must tread carefully. However, I think I have demonstrated here that combining parked blocks AND auto-finalization is not merely a tradeoff - the security assumption of "everything is fine if the majority hashrate is honest" no longer holds, because a 33% attacker can cause a far worse outcome than a deep reorg.
That said, unlike with 51% attacks, this particular attack is one that isn't as likely to be profitable for the attacker financially, so it may not be exploited by random Internet bad guys. Also, it would require some more complicated software and is less likely to succeed without some amount of network intelligence, like knowing which nodes are miners or exchanges to target. However, it CAN still be profitable in a number of ways: shorting BCH, low-conf double spends, and/or the possible selfish mining profits that could accrue from a failure at some step of this strategy.
Timejacking may be useful to smooth over some of the parts of the attack by making sure that a timejacked node will view a block as valid/invalid when the rest of the network doesn't, via timestamp manipulation. This can buy the attacker a little bit of time, and "shape" the network such that he knows which side of the split his targets may be on. Furthermore, if the attacker fails to split the network somewhat evenly, then he can ignore the minority side of the fork and immediately start trying again on the majority side, in an attempt to cause a 3-way split.
Finally, while you may believe that this attack is improbable, the prevailing wisdom of this sub is that a super powerful cabal of bankers will stop at nothing to destroy Bitcoin Cash (operating via their alleged proxy, Blockstream), and since a well-resourced attacker should be able to pull this attack off, I believe y'all should be more concerned than you have been. My recommendation is to remove the parked block functionality from ABC entirely, and accept the risk of medium-length reorgs.
submitted by iwantfreebitcoin to /r/btc

News everyone

This is a big announcement for Electroneum. See bottom of this article for the full details of the blockchain tech update.
Some of our community have been worried about our blockchain tech, especially regarding ASIC miners and blockchain flooding. We will cover these below along with the announcement of some exciting major changes to the way our blockchain runs.
ETN Blockchain Update May 30th
Monero (whose blockchain code ours is based on) performs an update approximately every 6 months, and this is a great practice, as it allows them to keep their technology moving forward and introduce new features. We will be following this model and our first major update (also known as a fork) is scheduled to take place at block 307500, which is approximately 10.30am BST on May 30th.
Don’t panic! – Forking explained
It’s important for everyone to understand that whilst this is known as a fork, it is very different to Bitcoin forking into Bitcoin Cash or Bitcoin Gold. The fork will not result in two currencies, as all the exchanges and pools will update their software in advance of the update block, and Electroneum will continue with an updated blockchain.
Time to test and implement
The end of May gives our community plenty of time to test and comment on the code changes that we will post on GitHub by Friday 5th of May. It also gives plenty of time for every node owner to update their Electroneum nodes, ready for the update block.
Electroneum divergence from Monero
We’ve always planned to move the Electroneum blockchain further towards reaching our goals, which in turn will move us away from Monero’s goals. We chose Monero because they’d written an awesome dynamic blocksize algorithm, but they also have some features that are not critical to Electroneum’s future. In this Electroneum update we’ve started to diverge away from some core Monero functionality. As we move towards a lean, fast blockchain to handle vast numbers of micropayments, we need to lose some of the overhead that comes with the privacy of Monero. We are still going to be far more private than Bitcoin or Ethereum (for instance, you won’t be able to look at someone else’s wallet balance), but by decreasing some of the privacy features we can fit significantly more transactions into a block, which is critical for our next stage of growth as our corporate partners start to bring on user numbers, and our vendor program starts delivering instant cryptocurrency acceptance into online and physical shops and stores. In short, Monero is the best privacy coin in the world, whereas we need to be the best micropayment system in the world.
ETN Blockchain Tech Update (Details)
Anti-ASIC. An ASIC is a computer chip that has been made for a specific task. In this instance the task is to mine the CryptoNight algorithm that Electroneum uses. We are implementing anti-ASIC code to ensure we have maximum resistance to any network attack that could occur in the future. Limiting mining to GPUs reduces the chances of a single entity possessing enough hashing power to attempt a 51% network attack. It’s important to note that there is no proof of a 51% network attack having taken place on the Electroneum blockchain.
Transfer Fee Increase. There have been a lot of comments about our transfer fee being too low. It is important to our project that the fee remains low, because we are going to be focusing on instant payments and instant micropayments in the real world, and we need fees that are lower than typical debit/credit card fees. However, we have suffered from blockchain flooding, so we are taking steps to ensure we are resistant to this in the future. We have therefore decided to increase our base fee to 0.1 ETN. This is still a fraction of the cost of transfer of other cryptocurrencies, but it increases the difficulty of flooding by an order of magnitude. Combined with our other updates (below), this will give us more effective resistance to blockchain flooding.
Increase block size before penalty. We have been enormously successful and seen some periods with huge amounts of legitimate blockchain transaction traffic. This, combined with blockchain flooding, has meant periods of blockchain delays. By increasing the block size before penalty, miners will be able to scale the blocks faster and get more transactions into a block. This will handle regular transactions and flood transactions, making delays less likely. Combined with the fee increase, this is a significant resistance enhancement against flooding.
Disabling of RingCT & Mixin. RingCT was introduced by Monero, whose main focus is privacy. Our main focus is bulk transactions for a mass audience, and thus we are disabling some of the privacy features of the blockchain. Disabling some privacy features means we can fit significantly more transactions into each block than with them enabled. This means less wait to get a transaction into a block and a leaner blockchain size. Wallets are still private, as we will continue to use a new stealth wallet address for every blockchain transaction, so there is still significantly more privacy than with Bitcoin or Ethereum, but considerably less privacy than with a privacy-focused coin like Monero.
Mempool life to 3 days. During high transactional volumes it is feasible that a transaction can remain in the mempool for 24 hours and reach the current limit. This would mean the transaction is returned to the sender, but that could take up to 7 days. By increasing the mempool life to 3 days (and in conjunction with some of the additional changes) we are ensuring a significant reduction in the possibility of these returned transactions.
2 minute blocks. Blocks are currently mined every minute. We are moving to two-minute blocks, which will significantly decrease the chance of an orphan block being created. Orphan blocks might contain transactions which will eventually (7 days) be returned or added to another block. Increasing the block time to 2 minutes has ramifications on the block reward, which will be modified (see below).
Block Reward. We are doubling the block reward to ensure the daily ETN block reward remains the same, despite the fact that we are releasing blocks at half the current speed. This means there will be no discernible effect on miners or pools.
Reduce difficulty window. The block difficulty window is being reduced. The block difficulty is calculated by looking at the last X blocks. It has come to our attention that hitting the ETN blockchain with a large amount of rented hashing power gives the miner an advantage over a short period of time (until the difficulty algorithm catches up with the new hashing rate). We are reducing the difficulty window to reduce the benefit these periodic miners have and to discourage this practice, making the mining process fairer. This should have little or no effect on the difficulty number itself except during the exceptional circumstances described.
Thanks for taking the time to read this update! If you are running an Electroneum node, remember to update before May 30th. If you are using a pool, ensure you let their Telegram or other social channel know that this update is critical and must be applied before May 30th, in advance of block 307500.
Have a great day everyone,
submitted by chindyagung to /r/Electroneum

The Nexus FAQ - part 1

Full formatted version: https://docs.google.com/document/d/16KKjVjQH0ypLe00aoTJ_hZyce7RAtjC5XHom104yn6M/
 

Nexus 101:

  1. What is Nexus?
  2. What benefits does Nexus bring to the blockchain space?
  3. How does Nexus secure the network and reach consensus?
  4. What is quantum resistance and how does Nexus implement this?
  5. What is Nexus’ Unified Time protocol?
  6. Why does Nexus need its own satellite network?
 

The Nexus Currency:

  1. How can I get Nexus?
  2. How much does a transaction cost?
  3. How fast does Nexus transfer?
  4. Did Nexus hold an ICO? How is Nexus funded?
  5. Is there a cap on the number of Nexus in existence?
  6. What is the difference between the Oracle wallet and the LLD wallet?
  7. How do I change from Oracle to the LLD wallet?
  8. How do I install the Nexus Wallet?
 

Types of Mining or Minting:

  1. Can I mine Nexus?
  2. How do I mine Nexus?
  3. How do I stake Nexus?
  4. I am staking with my Nexus balance. What are trust weight, block weight and stake weight?
 

Nexus 101:

1. What is Nexus (NXS)?
Nexus is a digital currency, distributed framework, and peer-to-peer network. Nexus further improves upon the blockchain protocol by focusing on several core technological principles.
Nexus will combine our in-development quantum-resistant 3D blockchain software with cutting edge communication satellites to deliver a free, distributed, financial and data solution. Through our planned satellite and ground-based mesh networks, Nexus will provide uncensored internet access whilst bringing the benefits of distributed database systems to the world.
For a short video introduction to Nexus Earth, please visit this link
 
2. What benefits does Nexus bring to the blockchain space?
As Nexus has been developed, an incredible amount of time has been put into identifying and solving several key limitations.
Nexus is also developing a framework called the Lower Level Library (LLL), which will incorporate a range of improvements.
For information about more additions to the Lower Level Library, please visit here
 
3. How does Nexus secure the network and reach consensus?
Nexus is unique amongst blockchain technology in that Nexus uses 3 channels to secure the network against attack. Whereas Bitcoin uses only Proof-of-Work to secure the network, Nexus combines a prime number channel, a hashing channel and a Proof-of-Stake channel. Where Bitcoin has a difficulty adjustment interval measured in weeks, Nexus can respond to increased hashrate in the space of 1 block and each channel scales independently of the other two channels. This stabilizes the block times at ~50 seconds and ensures no single channel can monopolize block production. This means that a 51% attack is much more difficult to launch because an attacker would need to control all 3 channels.
Every 60 minutes, the Nexus protocol automatically creates a checkpoint. This prevents blocks from being created or modified dated prior to this checkpoint, thus protecting the chain from malicious attempts to introduce an alternate blockchain.
 
4. What is quantum resistance and how does Nexus implement it?
To understand what quantum resistance is and why it is important, you need to understand how quantum computing works and why it’s a threat to blockchain technology. Classical computing uses an array of transistors. These transistors form the heart of your computer (the CPU). Each transistor is capable of being either on or off, and these states are used to represent the numerical values 1 and 0.
The number of states a set of binary digits (bits) can represent depends on the number of transistors available, according to the formula 2^n, where n is the number of transistors. Classical computers can only be in one of these states at any one time, so the speed of your computer is limited to how fast it can change states.
Quantum computers utilize quantum bits, “qubits,” which are represented by the quantum state of electrons or photons. These particles are placed into a state called superposition, which allows the qubit to assume a value of 1 or 0 simultaneously.
Superposition permits a quantum computer to process a higher number of data possibilities than a classical computer. Qubits can also become entangled. Entanglement makes a qubit dependent on the state of another, enabling quantum computing to calculate complex problems extremely quickly.
One such problem is the Discrete Logarithm Problem which elliptic curve cryptography relies on for security. Quantum computers can use Shor’s algorithm to reverse a key in polynomial time (which is really really really fast). This means that public keys become vulnerable to quantum attack, since quantum computers are capable of being billions of times faster at certain calculations. One way to increase quantum resistance is to require more qubits (and more time) by using larger private keys:
Bitcoin Private Key (256 bit) 5Kb8kLf9zgWQnogidDA76MzPL6TsZZY36hWXMssSzNydYXYB9KF
Nexus Private Key (571 bit) 6Wuiv513R18o5cRpwNSCfT7xs9tniHHN5Lb3AMs58vkVxsQdL4atHTF Vt5TNT9himnCMmnbjbCPxgxhSTDE5iAzCZ3LhJFm7L9rCFroYoqz
Bitcoin addresses are created by hashing the public key, so it is not possible to decrypt the public key from the address; however, once you send funds from that address, the public key is published on the blockchain rendering that address vulnerable to attack. This means that your money has higher chances of being stolen.
Nexus eliminates these vulnerabilities through an innovation called signature chains. Signature chains will enable access to an account using a username, password and PIN. When you create a transaction on the network, you claim ownership of your signature chain by revealing the public key of the NextHash (the hash of your public key) and producing a signature from the one time use private key. Your wallet then creates a new private/public keypair, generates a new NextHash, including the corresponding contract. This contract can be a receive address, a debit, a vote, or any other type of rule that is written in the contract code.
This keeps the public key obscured until the next transaction, and by divorcing the address from the public key, it is unnecessary to change addresses in order to change public keys. Changing your password or PIN code becomes a case of proving ownership of your signature chain and broadcasting a new transaction with a new NextHash for your new password and/or PIN. This provides the ability to login to your account via the signature chain, which becomes your personal chain within the 3D chain, enabling the network to prove and disprove trust, and improving ease of use without sacrificing security.
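As a rough conceptual sketch of that key-rotation idea (illustrative Python only, with toy stand-ins for real keypairs and signatures; this is not Nexus's actual code or data format):

import hashlib, hmac, os

def toy_keypair():
    # Toy stand-in for a real keypair - for illustration only.
    priv = os.urandom(32)
    pub = hashlib.sha256(b"pub" + priv).digest()
    return priv, pub

def toy_sign(priv, message):
    return hmac.new(priv, message, hashlib.sha512).digest()

class SignatureChainSketch:
    """Each transaction reveals the public key whose hash was committed to
    earlier (the NextHash), signs with the one-time private key, and commits
    to the hash of a freshly generated public key for the next transaction."""

    def __init__(self):
        self.priv, self.pub = toy_keypair()
        self.next_hash = hashlib.sha256(self.pub).digest()   # published commitment

    def make_transaction(self, contract: bytes):
        revealed_pub = self.pub                     # satisfies the prior NextHash
        signature = toy_sign(self.priv, contract)   # one-time use of this key
        self.priv, self.pub = toy_keypair()         # rotate to a fresh keypair
        self.next_hash = hashlib.sha256(self.pub).digest()
        return {"contract": contract, "pubkey": revealed_pub,
                "sig": signature, "next_hash": self.next_hash}

chain = SignatureChainSketch()
print(chain.make_transaction(b"debit 10 NXS to <address>"))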
The next challenge with quantum computers is that Grover’s algorithm effectively halves the bit-security of a one-way hash function. Because of this, Nexus incorporates two new hash functions, Skein and Keccak, which were designed in 2008 as part of a contest to create a new SHA-3 standard. Keccak narrowly defeated Skein to win the contest, so to maximize their potential Nexus combines these algorithms. Skein and Keccak utilize permutation to rotate and mix the information in the hash.
To maintain a respective 256/512 bit quantum resistance, Nexus uses up to 1024 bits in its proof-of-work, and 512 bits for transactions.
 
5. What is the Unified Time protocol?
All blockchains use time-stamping mechanisms, so it is important that all nodes operate using the same clock. Bitcoin allows for up to 2 hours’ discrepancy between nodes, which provides a window of opportunity for the blockchain to be manipulated by time-related attack vectors. Nexus eliminates this vulnerability by implementing a time synchronization protocol termed Unified Time. Unified Time also enhances transaction processing and will form an integral part of the 3D chain scaling solution.
The Unified Time protocol facilitates a peer-to-peer timing system that keeps all clocks on the network synchronized to within a second. This is seeded by selected nodes with timestamps derived from the UNIX standard; that is, the number of seconds since January 1st, 1970 00:00 UTC. Every minute, the seed nodes report their current time, and a moving average is used to calculate the base time. Any node which sends back a timestamp outside a given tolerance is rejected.
It is important to note that the Nexus network is fully synchronized even if an individual wallet displays something different from the local time.
 
6. Why does Nexus need its own satellite network?
One of the key limitations of a purely electronic monetary system is that it requires a connection to the rest of the network to verify transactions. Existing network infrastructure only services a fraction of the world’s population.
Nexus, in conjunction with Vector Space Systems, is designing communication satellites, or cubesats, to be launched into Low Earth Orbit in 2019. Primarily, the cubesat mesh network will exist to give Nexus worldwide coverage, but Nexus will also utilize its orbital and ground mesh networks to provide free and uncensored internet access to the world.
 

The Nexus Currency (NXS):

1. How can I get Nexus?
There are two ways you can obtain Nexus. You can either buy Nexus from an exchange, or you can run a miner and be rewarded for finding a block. If you wish to mine Nexus, please follow our guide found below.
Currently, Nexus is available on several exchanges, and Nexus is actively reaching out to other exchanges to continue to be listed on cutting-edge new financial platforms.
 
2. How much does a transaction cost?
Under Nexus, the fee structure for making a transaction depends on the size of your transaction. A default fee of 0.01 NXS will cover most transactions, and users have the option to pay higher fees to ensure their transactions are processed quickly.
When the 3D chain is complete and the initial 10-year distribution period finishes, Nexus will absorb these fees through inflation, enabling free transactions.
 
3. How fast does Nexus transfer?
Nexus reaches consensus approximately every ~50 seconds. This is an average time, and will in some circumstances be faster or slower. NXS currency which you receive is available for use after just 6 confirmations. A confirmation is proof from a node that the transaction has been included in a block. The number of confirmations for a transaction is the number of blocks that have been added to the chain since the block containing that transaction. The more confirmations a transaction has, the more secure its placement in the blockchain is.
 
4. Did Nexus hold an ICO? How is Nexus funded?
The Nexus Embassy, a 501(c)(3) not-for-profit corporation, develops and maintains the Nexus blockchain software. When Nexus began under the name Coinshield, the early blocks were mined using the Developer and Exchange (Ambassador) addresses, which provide funding for the Nexus Embassy.
The Developer Fund fuels ongoing development and is sourced by a 1.5% commission per block mined, which will slowly increase to 2.5% after 10 years. This brings all the benefits of development funding without the associated risks.
The Ambassador (renamed from Exchange) keys are funded by a 20% commission per block reward. These keys are mainly used to pay for marketing, and producing and launching the Nexus satellites.
When Nexus introduces developer and ambassador contracts, they will be approved, denied, or removed by six voting groups namely: currency, developer, ambassador, prime, hash, and trust.
Please Note: The Nexus Embassy reserves the sole right to trade, sell and or use these funds as required; however, Nexus will endeavor to minimize the impact that the use of these funds has upon the NXS market value.
 
5. Is there a cap on the number of NXS in existence?
After an initial 10-year distribution period ending on September 23rd, 2024, there will be a total of 78 million NXS. Over this period, the reward gradient for mining Nexus follows a decaying logarithmic curve instead of the reward halving inherent in Bitcoin. This avoids creating a situation where older mining equipment is suddenly unprofitable, encouraging miners to continue upgrading their equipment over time and at the same time reducing major market shocks on block halving events.
When the distribution period ends, the currency supply will inflate annually by a maximum of 3% via staking and by 1% via the prime and hashing channels. This inflation is completely unlike traditional inflation, which degrades the value of existing coins. Instead, the cost of providing security to the blockchain is paid by inflation, eliminating transaction fees.
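As a quick back-of-the-envelope check of those figures (illustrative Python; the 78 million cap and the 3% + 1% rates come from the text above, and the base supply grows slightly each subsequent year):

# Illustrative upper bound on annual new NXS after September 2024.
supply_at_end_of_distribution = 78_000_000
staking_inflation = 0.03     # max 3% per year via staking
mining_inflation = 0.01      # max 1% per year via the prime and hashing channels

max_new_first_year = supply_at_end_of_distribution * (staking_inflation + mining_inflation)
print(f"Max new NXS in the first year: {max_new_first_year:,.0f}")   # ~3,120,000 NXS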
Colin Cantrell - Nexus Inflation Explained
 
6. What is the difference between the LLD wallet and the Oracle wallet?
Due to the scales of efficiency needed by blockchain, Nexus has developed a custom-built database called the Lower Level Database. Since the development of the LLD wallet 0.2.3.1, which is a precursor to the Tritium updates, you should begin using the LLD wallet to take advantage of the faster load times and improved efficiency.
The Oracle wallet is a legacy wallet which is no longer maintained or updated. It utilized the Berkeley DB, which is not designed to meet the needs of a blockchain. Eventually, users will need to migrate to the LLD wallet. Fortunately, the wallet.dat is interchangeable between wallets, so there is no risk of losing access to your NXS.
 
7. How do I change from Oracle to the LLD wallet?
Step 1 - Backup your wallet.dat file. You can do this from within the Oracle wallet Menu, Backup Wallet.
Step 2 - Uninstall the Oracle wallet. Close the wallet and navigate to the wallet data directory. On Windows, this is the Nexus folder located at %APPDATA%\Nexus. On macOS, this is the Nexus folder located at ~/Library/Application Support/Nexus. Move all of the contents to a temporary folder as a backup.
Step 3 - Copy your backup of wallet.dat into the Nexus folder located as per Step 2.
Step 4 - Install the Nexus LLD wallet. Please follow the steps as outlined in the next section. Once your wallet is fully synced, your new wallet will have access to all your addresses.
 
8. How do I install the Nexus Wallet?
You can install your Nexus wallet by following these steps:
Step 1 - Download your wallet from www.nexusearth.com. Click the Downloads menu at the top and select the appropriate wallet for your operating system.
Step 2 - Unzip the wallet program to a folder. Before running the wallet program, please consider space limitations and load times. On the Windows OS, the wallet saves all data to the %APPDATA%\Nexus folder, including the blockchain, which is currently ~3GB.
On macOS, data is saved to the ~/Library/Application Support/Nexus folder. You can create a symbolic link, which will allow you to install this information in another location.
Using Windows, follow these steps:
On macOS, follow these steps:
Step 3 (optional) - Before running the wallet, we recommend downloading the blockchain database manually. Nexus Earth maintains a copy of the blockchain data which can save hours from the wallet synchronization process. Please go to www.nexusearth.com and click the Downloads menu.
Step 4 (optional) - Extract the database file. This is commonly found in the .zip or .rar format, so you may need a program like 7zip to extract the contents. Please extract it to the relevant directory, as outlined in step 2.
Step 5 - You can now start your wallet. After it loads, it should be able to complete synchronization in a short time. This may still take a couple of hours. Once it has completed synchronizing, a green check mark icon will appear in the lower right corner of the wallet.
Step 6 - Encrypt your wallet. This can be done within the wallet, under the Settings menu. Encrypting your wallet will lock it, requiring a password in order to send transactions.
Step 7 - Backup your wallet.dat file. This can be done from the File menu inside the wallet. This file contains the keys to the addresses in your wallet. You may wish to keep a secure copy of your password somewhere, too, in case you forget it or someone else (your spouse, for example) ever needs it.
You should back up your wallet.dat file again any time you create – or a Genesis transaction creates (see “staking” below) – a new address.
 

Types of Mining or Minting:

1.Can I mine Nexus?
Yes, there are 2 channels that you can use to mine Nexus, and 1 channel of minting:
Prime Mining Channel
This mining channel looks for a special prime cluster of a set length. This type of calculation is resistant to ASIC mining, allowing for greater decentralization. This is most often performed using the CPU.
Hashing Channel
This channel utilizes the more traditional method of hashing. This process adds a random nonce, hashes the data, and compares the resultant hash against a predetermined format set by the difficulty. This is most often performed using a GPU.
Proof of Stake (nPoS)
Staking is a form of mining NXS. With this process, you can receive NXS rewards from the network for continuously operating your node (wallet). It is recommended that you only stake with a minimum balance of 1000 NXS. It’s not impossible to stake with less, but it becomes harder to maintain trust. Losing trust resets the interest rate back to 0.5% per annum.
 
2. How do I mine Nexus?
As outlined above, there are two types of mining and 1 proof of stake. Each type of mining uses a different component of your computer to find blocks, the CPU or the GPU. Nexus supports CPU and GPU mining on Windows only. There are also third-party macOS builds available.
Please follow the instructions below for the relevant type of miner.
 
Prime Mining:
Almost every CPU is capable of mining blocks on this channel. The most effective method of mining is to join a mining pool and receive a share of the rewards based on the contribution you make. To create your own mining facility, you need the CPU mining software, and a NXS address. This address cannot be on an exchange. You create an address when you install your Nexus wallet. You can find the related steps under How Do I Install the Nexus Wallet?
Please download the relevant miner from http://nexusearth.com/mining.html. Please note that there are two different miner builds available: the prime solo miner and the prime pool miner. This guide will walk you through installing the pool miner only.
Step 1 - Extract the archive file to a folder.
Step 2 - Open the miner.conf file. You can use the default host and port, but these may be changed to a pool of your choice. You will need to change the value of nxs_address to the address found in your wallet. Sieve_threads is the number of CPU threads you want to use to find primes. Ptest_threads is the number of CPU threads you want to use to test the primes found by the sieve. As a general rule, the number of threads used for the sieve should be 75% of the threads used for testing.
It is also recommended to add the following line to the options found in the .conf file:
"experimental" : "true"
This option enables the miner to use an improved sieve algorithm which will enable your miner to find primes at a faster rate.
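Putting those settings together, a miner.conf might look roughly like the following. This is only a sketch based on the field names mentioned above; the exact key names, casing and defaults in the real file may differ, and all values here are placeholders (sieve threads set to 75% of test threads, per the guidance above):

{
    "host" : "pool.example.com",
    "port" : "<pool port>",
    "nxs_address" : "<your NXS wallet address>",
    "sieve_threads" : 3,
    "ptest_threads" : 4,
    "experimental" : "true"
}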
Step 3 - Run the nexus_cpuminer.exe file. For a description of the information shown in this application, please read this guide.
 
Hashing:
The GPU is a dedicated processing unit housed on board your graphics card. The GPU is designed for massively parallel processing and is able to perform certain tasks extremely well, unlike your CPU, which handles general-purpose, mostly sequential work. Nexus supports both AMD and Nvidia GPU mining, and works best on the newer models. Officially, Nexus does not support GPU pool mining, but there are 3rd party miners with this capability.
The latest software for the Nvidia miner can be found here. The latest software for the AMD miner can be found here. The AMD miner is a third party miner. Information and advice about using the AMD miner can be found on our Slack channel. This guide will walk you through the Nvidia miner.
Step 1 - Close your wallet. Navigate to %appdata%\Nexus (~/Library/Application Support/Nexus on macOS) and open the nexus.conf file. Depending on your wallet, you may or may not have this file. If not, please create a new txt file and save it as nexus.conf
You will need to add the following lines before restarting your wallet:
Step 2 - Extract the files into a new folder.
Step 3 - Run the nexus.bat file. This will run the miner and deposit any rewards for mining a block into the account on your wallet.
For more information on either Prime Mining or Hashing, please join our Slack and visit the #mining channel. Additional information can be found here.
 
3. How do I stake Nexus?
Once you have your wallet installed, fully synchronized and encrypted, you can begin staking by:
After you begin staking, you will receive a Genesis transaction as your first staking reward. This establishes a Trust key in your wallet and stakes your wallet balance on that key. From that point, you will periodically receive additional Trust transactions as further staking rewards for as long as your Trust key remains active.
IMPORTANT - After you receive a Genesis transaction, backup your wallet.dat file immediately. You can select the Backup Wallet option from the File menu, or manually copy the file directly. If you do not do this, then your Nexus balance will be staked on the Trust key that you do not have backed up, and you risk loss if you were to suffer a hard drive failure or other similar problem. In the future, signature chains will make this precaution unnecessary.
 
4. I am staking with my Nexus balance. What are interest rate, trust weight, block weight, and stake weight?
These items affect the size and frequency of staking rewards after you receive your initial Genesis transaction. When staking is active, the wallet displays a clock icon in the bottom right corner. If you hover your mouse pointer over the icon, a tooltip-style display will open up, showing their current values.
Please remember to backup your wallet.dat file (see question 3 above) after you receive a Genesis transaction.
Interest Rate - The minting rate at which you will receive staking rewards, displayed as an annual percentage of your NXS balance. It starts at 0.5%, increasing to 3% after 12 months. The rate increase is not linear but slows over time. It takes several weeks to reach 1% and around 3 months to reach 2%.
With this rate, you can calculate the average amount of NXS you can expect to receive each day for staking.
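For example, a minimal sketch of that calculation (the actual reward timing depends on your trust key, so this is just the average implied by the annual rate):

def expected_daily_reward(balance_nxs, annual_rate):
    # Average NXS per day implied by an annual staking rate.
    return balance_nxs * annual_rate / 365

print(expected_daily_reward(10_000, 0.005))   # ~0.137 NXS/day at the starting 0.5% rate
print(expected_daily_reward(10_000, 0.03))    # ~0.822 NXS/day at the 3% cap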
Trust Weight - An indication of how much the network trusts your node. It starts at 5% and increases much more quickly than the minting (interest) rate, reaching 100% after one month. Your level of trust increases your stake weight (below), thus increasing your chances of receiving staking transactions. It becomes easier to maintain trust as this value increases.
Block Weight - Upon receipt of a Genesis transaction, this value will begin increasing slowly, reaching 100% after 24 hours. Every time you receive a staking transaction, the block weight resets. If your block weight reaches 100%, then your Trust key expires and everything resets (0.5% interest rate, 5% trust weight, waiting for a new Genesis transaction).
This 24-hour requirement will be replaced by a gradual decay in the Tritium release. As long as you receive a transaction before it decays completely, you will hold onto your key. This change addresses the potential of losing your trust key after months of staking simply because of one unlucky day receiving trust transactions.
Stake Weight - The higher your stake weight, the greater your chance of receiving a transaction. The exact value is derived from a formula using your trust weight and block weight, and roughly equals the average of the two. Thus, each time you receive a transaction, your stake weight will reset to approximately half of your current level of trust.
submitted by scottsimon36 to /r/nexusearth

Here is a transcript from the Ripple Consensus Presentation (May 22nd)

https://www.xrpchat.com/topic/5203-ripples-big-demo-and-why-you-missed-the-big-deal/?do=findComment&comment=49659
MY TRANSCRIPTION... 0:19
PATRICK GRIFFIN: All right I think we're gonna get started. There's total capacity. People at the door - there's a little room over here inside. There's chairs here - there’s chairs over here don't be shy. All right in case you don't know this, you are in “XRP In Action,” a live demo and expert Q & A.
I’m Patrick Griffin [with] David Schwartz and Stefan Thomas. We've got an hour today. We'll walk you through, we’ll do a quick round of introductions. Stefan is going to do a demo. We have a self-guided Q&A where I basically tee up some questions for these guys that will all be softballs don't worry! Then we'll turn it over to you guys to ask questions for the technical experts. Maybe we'll do it the quick round of intros, starting with Stefan:
1:07 STEFAN THOMAS: Yeah so, my name is Stefan Thomas I am CTO with Ripple. Before Ripple I was involved with BitCoin for several years and now I work on the vision and technical direction for Ripple.
1:22 DAVID SCHWARTZ: My name is David Schwartz. I'm the chief cryptographer at Ripple. I’ve been working on Ripple since 2011 and public ledger tech. Before that I was working on cryptographic messaging systems and cloud storage for government and military applications.
1:35 PATRICK GRIFFIN: I am Patrick Griffin. I’m the head of business development. I don’t know why I’m up here, but there’s our CTO and our head of cryptography, but actually I think we are the, to be honest here, I think we are the, we are the one two and three first employees of Ripple. Well, two one and three. We've been here for quite some time and it's been a long journey. So why don't we first start off with the demo and I think I'll tee it up: this is a demo that demonstrates our technology, sort of the InterLedger Protocol, moving payments in and out of XRP, and Stefan will do a better job of articulating what you are about to see.
2:22 STEFAN THOMAS: All right thanks Patrick. So here we're gathered to have a quick round table on XRP. I want to go through the demo pretty quickly so we can get to the actual discussion Q&A which I thin is the meat of this session. Basically, what we're trying to do at Ripple is we're trying to make money move like information. This has been our mission since day one, and it has never changed and so we're building a number of different technologies that all integrate to make this vision a reality. And so what we think about how information actually moves I think it's really it's really this chart that captures it.
So what's happened is that the cost of moving information has really declined over the last couple decades and very strongly so. And as a result the volume of information that’s been moving has exploded. And so, very often you know, our customers will be talking to me about, you know:
Oh are you focused on corporate payments? Are you focused on consumer payments?
I think what you have to realize is that we're somewhere down here in that curve and so you know when you say like two-thirds of all payments are corporate payments you're really talking about two-thirds of almost nothing. I think what we're focused on is this growth that you can create if you increase the efficiency of the system enough.
And so the way that we're kind of approaching that is we want to streamline the way that liquidity works today. So today you have 27 trillion dollars in float sitting around the world that is essentially there to facilitate real-time payments when the underlying systems are not real time.
3:59 STEFAN THOMAS: So, for instance, if I swipe my credit card somewhere, there has to be actual credit or money available to pay that merchant if that's supposed to happen instantly while the underlying money can't move in real time. And so that's been the case ever since we were using gold and fiat currencies in order to move money internationally, but with digital assets there's actually opportunity to improve upon that and actually move real assets in real time.
So if you have something like XRP you don't need to pre-fund float all around the world. You can actually just have this digital asset and if you want to transfer value to somebody, you want to transfer value internationally, you can just transfer that asset and that moves instantly okay?
4:40 STEFAN THOMAS: So that's really the improvement. So with that I want to give you sort of a case example in a demo. This is something that already happens on blockchains today, where there are money service businesses that are using blockchain in order to move funds, so they might sort of offer this as a service to small and medium businesses, where if I want to, let's say, pay somebody in a different country, I can go to one of these companies and they will move that money for me.
5:09 STEFAN THOMAS: So, in this example, we're kind of pretending that we're a publisher, we have a reporter in the field, and we’d like to pay them. And so, you know, we don't really build apps, but we enable banks and other money service businesses to build apps on top of our platform. So this is kind of a mock-up that we’ve developed where, you can imagine, this would be just built into the particular app of that company. And so I can basically pick any amount, so let’s say I want to send, say, $7, and what happens is that, as you can see, that amount updates. So what happens during that time is that we actually try to find the cheapest path from where the sender is to whichever provider the recipient uses, and then once we've found that cheapest path, we figure out what the exact cost is going to be, so we have that transparency up front. What is the cost of this payment? And this is all powered by the open source protocol InterLedger. Now, when I send this payment, it goes through right away. I don't have to wait for a ton of confirmations and so on.
6:11 STEFAN THOMAS: So let's talk a little bit about what is happening there in the background. So first, we basically look at the topology of the network and then we try to find a path. So say it found a path through XRP. Once we select the path, we basically send a quote request to figure out what we think that cost is going to be, and then we send the money through in two phases as per the InterLedger Protocol, and that's enabled on XRP using a feature called escrow that we just launched earlier this year, and so now XRP is fully InterLedger enabled.
6:50 STEFAN THOMAS: So, if we look at the kind of a cost calculation, this is kind of some fictional numbers but it's correct in terms of order of magnitude, right. So you have Bitcoin, you have Ethereum, we have XRP, we have Swift, and so our algorithm basically goes in and it tries to select the best option, and so people often ask me like why does InterLedger help XRP? Or why are you guys working on InterLedger as a completely neutral protocol when you actually have this vested interest in XRP?
7:18 STEFAN THOMAS: Well, because the reason is that XRP is right now by far the best digital asset but it's not being used as much as Bitcoin, for instance, and so in order to close that gap we want to get to a point where the selection of asset is kind of automated and you have algorithms to just pick the best one in which case, right now, XRP would get picked all the time. So that's why we have such a vested interest in just enabling more efficient selection. All right. So as you can see, it's the lowest fee right now and it’s the fastest turn right.
7:48 STEFAN THOMAS: Now, going a little bit further into the future, I was kind of talking about that huge explosion in volume, and I think where that comes from is completely new user interfaces that we don't necessarily think about today. So one example would be, you have something like a publisher and a reader and a reporter, and the reader is actually browsing an article and they're not having to sign up and go through a paywall in order to do that. Their browser just pays them on their behalf automatically, and then as a publisher I can see the money sort of coming in, in real time, as users are browsing my website. And so you're basically providing this sort of metered access to your content. That's just one example. I think there's a lot of cases of APIs and other parts of the industry that could benefit from micro-payments as a more granular way of transacting. So I don't have time to talk about that, but with that I hope you've got sort of a taste of both what XRP looks like today as well as what the future holds in terms of doing micro payments through payment channels, and so on, on InterLedger. So with that, I'll hand it over to Patrick to start the discussion.
9:00 PATRICK GRIFFIN: Very cool. So maybe it’s worth stepping back and also looking at our company strategy and having a conversation around what it means when we talk about an Internet of Value, which I think well this is a Silicon Valley company and for most people that doesn't mean a whole lot so maybe we can take a first stab at trying to explain what is an Internet of Value and Stefan, I’ll start with you. Actually, why don’t we start with David and give you a break.
9:24 DAVID SCHWARTZ: Yeah, so what is the Internet of Value and what are we working on? Well, the Internet has brought connectivity to billions of people around the world. They have smart phones. They have easy access to the movement of information but money is still siloed. It's still trapped in systems that don't talk to each other. Moving payments are expensive. They're slow. There's high friction. There's trillions of dollars that moves across borders and that's moved mostly by financial institutions, and we need to move that money more efficiently. We need to know where it is. We need to improve that flow.
10:02 DAVID SCHWARTZ: I don't know if any of you have made international payments or most of you have on traditional systems and you know that it's very hard to know where that money is. It’s very hard to know how much it's going to cost you ahead of time. The user experience is not great. A significant fraction of those payments fail. It takes several days. It's almost easier to ship money than it is to use our existing payment system. So we want to provide an Internet of Value where there is instant payment. Payment on demand, without failure. When you know ahead of time how much money is going to deliver. You know what path is going to take and because that transaction is set up using modern internet protocols you know ahead of time exactly what the requirements are at the destination so you don't have a failure because you didn't have the right information at the beginning.
10:45 STEFAN THOMAS: Yeah so um whenever I think of the Internet of Value, I think the number one thing that happened with the internet was that it kind of commoditized reach. So, before the Internet, if you wanted to be an online service provider like AOL or CompuServe the number one thing that you needed to have in order to be competitive is a lot of users. And if the main thing you're competing over is just having a lot of users it's very hard to get into that market for obvious reasons because you start out with zero users so how do you attract the first couple? But once you have something like the internet where all the different networks are actually tied together, suddenly the number of users you have is completely irrelevant, right? Because all of the networks are tied together you can reach all the websites, you can email all the people on the internet and so the competition has to be about something else and what does it become about? It becomes about about the efficiency of the system.
11:35: STEFAN THOMAS: And so, this fundamental transition has not happened with money yet. Like right now the the biggest consumer payment systems are things like Visa and MasterCard and they're very much competing on: We’re the biggest. We have the most merchants. We have the most customers, and so how are you going to compete with us, right? We would not even have to try to be efficient, necessarily, right? Because we're only competing with each other. It's very hard to get into that market, and so what we're trying to do with InterLedger, by creating an internet working protocol we're allowing you to go across multiple hops across multiple steps through the financial system and as a result you can tie a lot of smaller providers, a lot of smaller banks together and as a result make a system that’s much more competitive.
12:15 PATRICK GRIFFIN: I’ll just add my two cents in. When I talk about the Internet of Value with customers, it's typically a conversation about costs and opportunities, and for us, you know, the internet analogy is overused, but I think the Internet of Value, at least for me, is about bringing the marginal cost of payment processing down to as close to zero as possible. Now you can do that in one of two ways. The first is to lower the cost of payment processing itself. Just for the sake of conversation, say these two things are 50/50. Payment processing is the messaging going between institutions and the cost of reconciling transactions as they go from one siloed network to another siloed network. Those are huge costs that the system currently bears, just as a function of tracking down lost payments or fixing mistakes and broken transactions.
13:00 PATRICK GRIFFIN: Something like 12% of all international wires fail. That is an astonishing number if you come from Silicon Valley, where you're typically used to five nines of reliability. The financial system isn’t working even with one nine of reliability. That's one side of the equation: the processing function. We are able to achieve better processing by starting that sort of settlement layer; it’s a little bit academic, but ultimately what our customers are buying from us today is just a payment processing capability.
13:30 PATRICK GRIFFIN: The second leg of the stool, if you will, of this two-legged stool for the Internet of Value, is liquidity. And this liquidity cost is a huge component of the payments infrastructure today. And so, when you think about the cost that you pay when you wire money internationally, it's not just processing costs and fees. Banks and financial institutions and payment processors have to cover their cost of capital. They are laying out a massive amount of cash in different overseas accounts to make sure that when you send a payment to Japan there's cash on hand in Japan to service your payment.
14:05 PATRICK GRIFFIN: The whole visual that we saw here with XRP, that's really where we see there being a large opportunity to bring the liquidity costs down. If you can fund your payment instantly, on demand, without pre-floating cash or opening up credit lines with your counter-parties, you can really bring down this component of that cost. So those two things together, in my mind at least, that is what really comprises the Internet of Value. If you tackle those two things, processing and liquidity, it really starts to open up and level the playing field. And on leveling the playing field, maybe a question back to you, Stefan, and a little bit about the strategy: as we go out and roll out these new APIs for bank-to-bank or financial institution processing, and this narrative around using digital assets for payment, certainly there's no reason why you couldn't insert Bitcoin in there, or Ethereum, or some other digital asset. Do you view this as maybe leveling the playing field for all digital assets and creating an opportunity for other digital assets to come in and basically compete for that case?
STEFAN THOMAS: 15:12 Yeah so, we definitely look at it as a way to create more competition. I think, just looking at the market today, most of the digital assets out there are not really designed for enterprise use cases, right? They're coming from a background of direct-to-consumer use. They're kind of designed in a way that maybe isn't always necessarily totally in line with how regulators think about the financial system, and as a result it’s quite difficult for companies to use these assets. So I think maybe some of the people in the room are Bitcoin entrepreneurs, and so you may know some of these struggles and some of these difficulties of using an asset like Bitcoin. I think, speaking as CTO, more from the technical side, there are definitely big differences between the different digital assets, and so if you look at things like settlement speed, on Ripple you get below four seconds most of the time, four seconds on average. On Bitcoin you have to wait nine minutes just to get one confirmation.
16:14 STEFAN THOMAS: There's things like finality. On Ripple, when you get one confirmation you can a hundred percent trust it; it cannot get reversed, because the set of validators is known, so it can't be some validator you've never heard of suddenly coming up with a different answer. Whereas on Bitcoin, there can always be a longer chain that you just haven't heard of yet, so you have to wait for multiple confirmations to gain more confidence. Another difference is that Ripple is deterministic whereas Bitcoin is random, so what that means is that the actual delay between blocks on Ripple is pretty consistent. It's four seconds with a standard deviation of 0.8 seconds, so it's almost always exactly four seconds. And so, with Bitcoin it's more variable, right? So you could have a block after a minute. You can have a block after half an hour. And so, it's much harder for businesses to rely on a system that has that high variability, because it increases your risk while you're holding the asset.
17:12 STEFAN THOMAS: So these are just some examples of why we think that XRP is best suited for payments use cases. And I think I'll be giving a talk later today going into a bit more depth on some of these differences.
17:28 DAVID SCHWARTZ: And we're not afraid of a level playing field. As Stefan said, we think we can succeed on a level playing field, but also you can get people to build a level playing field. It's very hard to get other people to stand behind something that has a built-in bias in favor of one company. Twitter doesn't mind the fact that the internet wasn't built for Twitter. Facebook doesn't mind. They like the fact that there's an open platform that everybody can support and use, and they're willing to compete on that level playing field, and if they lose on that level playing field, you know, so be it; somebody else will win and the world will be a better place for it. We believe that we have the advantages today and we believe that we can get the industry behind an open standard that facilitates these types of instantaneous payments.
18:07 PATRICK GRIFFIN: So David, this is a question coming back to you. In this level playing field, obviously digital assets can compete on different characteristics. Obviously, Bitcoin's scalability challenges have been, I think, very famous recently. Could you comment a little bit on Bitcoin's recent woes, some of the things that have come up around resiliency and scalability, and maybe draw a contrast to XRP and how XRP is working?
18:32 DAVID SCHWARTZ: Sure. I think the idea that you don't need governance, the idea that you can just have this decentralized system that magically governs itself, doesn't really work. The internet is a decentralized system and it has governance. Bitcoin currently is experiencing a little bit of a governance failure due to a misalignment of incentives. Historically the miners have had an incentive to keep the system working. Everybody needs the Bitcoin system to work, whether you hold, whether you try to do payments, whether you're mining. This system has to work or nobody has anything. Everybody's benefited from the value of Bitcoin going up. If you’re a miner, you want the value to go up. If you hold Bitcoin, you want the value to go up. If you're using it for payments, having more liquidity and lower risk when holding bitcoins is good for you.
19:11 DAVID SCHWARTZ: So everybody's incentives were aligned. They're starting to become dis-aligned recently because miners have been getting a lot of revenue from transaction fees Miners like high transaction fees. Users obviously would prefer to pay less for their payments. People who want to use Bitcoin as a payment platform want frictionless payments and they're not getting them because of the fees. So there's been a little bit of a governance breakdown due to that misalignment of incentives and it's not clear how you resolve that. It's not really clear how the stakeholders can realign their incentives.
19:39 DAVID SCHWARTZ: I’m confident that Bitcoin will come out come through it but I think it shows that governance is important. You should understand how a system is governed whatever system it is because there is going to have to be governance. It’s not going to magically govern itself. Now Ripple, the stakeholders are the validators and the validators are sort of chosen by the other validators, so right now Ripple is obviously very big in that space. We’re the major stakeholder on the network, but the recent interest into the price increase has begun diversifying the stakeholders and so we hope to see different jurisdictions, different companies and those will be the people who will be the stakeholders and they'll make the decision if there are going to be changes in the rules behind in that market. We think that that will work better and I think if you, once you accept that there has to be governance, you really want it to be the people who are using the network. You don't want the technology to force you into having other stakeholders whose interest may be adverse to the people who just want to use the system to store value and make payments.
20:32 PATRICK GRIFFIN: So Stefan, I mean, do you have anything to add just in terms of the underlying design of the systems and how they're confirming transactions? I think when you go way, way back to our company's beginning it was billed as Bitcoin 2.0. And you know, we felt like there was another way you could build a decentralized digital asset without mining. So maybe talk a little about the confirmation engine behind XRP and some of its advantages over other systems.
21:04 STEFAN THOMAS: Yeah, so as I mentioned in the introduction, I was fairly involved in the Bitcoin community back in 2010-2011, and one of the features that I contributed to was pay to script hash, as a reviewer. I was one of the first people to re-implement Bitcoin, and I pointed out some flaws, and we ended up with a much better solution. And so, through that experience of going through the cycle of a new feature on Bitcoin, even back then when the community was much smaller, I realized that it was actually very painful to do even an uncontroversial improvement to the system, and that was partly because people had a very strong tendency to be conservative, which is a good thing whenever you're modifying a live system. But there was also just no good process for introducing changes.
22:00 STEFAN THOMAS: We had to come up with a process ad hoc. We came up with this whole voting on mining power and so on. Now, from that experience, I remember going back to a wiki page on the Bitcoin wiki called the hard fork wish list, and I kind of looked at it, and it's sort of the list of other things that we wanted to do, and a lot of them were, in my humble opinion, must-haves for any kind of mainstream or enterprise adoption. And so I was kind of putting numbers next to them, like this would take eight months, this would take 12 months, this would take two years, and it started to add up. Like, I'm not going to see this get to that point if we go at this rate.
22:38 STEFAN THOMAS: And then you know Ripple approached me and they had a lot of that hard fork wish list already implemented but maybe more importantly they had a different idea on the governance structure and I think there's sort of two key differences: The first key difference is there is an entity that's actually funding the development of the asset and all the technology behind the asset. And so you know, I was looking at the Bitcoin foundation website the other day and they're currently, their most recent blog post is to promote this lawsuit in New York to try to strike down the bit license and apparently the foundation feels that it's strategically important for Bitcoin to kind of fund this lawsuit and they looked at how many people had actually donated to the donation address that they were giving and it was just over a thousand dollars basically. Almost nothing
23:31 STEFAN THOMAS: And I was thinking like well if XRP you know had any strategic issue like that there would be millions of dollars immediately that just Ripple would put behind the issue and so as a holder of the asset that's really important for me to know that, you know, there is some some entity that's actually defending it from a technical standpoint, from a legal standpoint, from a business standpoint. That makes a big difference
23:53 STEFAN THOMAS: And then the second big difference that I saw was how features, and how generally the evolution of the technology, is managed. So on Ripple, there's voting among the validators, which is not too dissimilar from the kind of mining voting that we're doing on Bitcoin. However, the validators on Ripple are largely chosen by the users. And so they're not chosen by some algorithm or just by virtue of being very efficient at mining. And so, as David pointed out earlier, the incentives are very different. On Ripple, the incentive is: I want the people who are appointing me to be a validator to be happy with my validations, because otherwise they will stop paying me. And so there's a much more closely aligned incentive for the validators on Ripple to do what the actual users want.
24:46 DAVID SCHWARTZ: And I would add that there are sort of vulnerabilities in both types of systems. Like with the miners, it would be a double spend. With the validators, they could simply stop validating and the network would halt. But one tremendous difference is that you know how to fix one, and it's not clear how you would fix the other. So if you had miners that were being pressured, let's say by a friend in government, or they were double spending, or for whatever reason they were holding transaction fees high, let's say the block size issue got to the point where it was absolutely critical and there was no ability to come up with an agreement. It's not clear how you solve that. You change the mining algorithm? Like, that's the nuclear option? Nobody knows what you do. With the consensus system, it is clear what you do. You can change the validators. The validators work at the pleasure of the users, the holders, the real stakeholders of the network.
25:33 DAVID SCHWARTZ: That, I think, is a fairly significant advantage once you realize how important governance is. And it's not just a matter of failure; as Stefan pointed out, there's going to be evolution of the system, unless you think the systems are absolutely perfect today. Well, Bitcoin has already proven that they're not absolutely perfect today. I certainly wouldn't try to claim that Ripple is perfect today. We have a wish list of features too, limited by engineering time, but we have to get people to agree to implement those features, and I think that's also an argument why you can't have one blockchain to rule them all. Features also have costs, and every feature has a cost, because if you have a public blockchain, everybody that uses that public blockchain, at a minimum, when there's a new feature, has to do a security review and make sure that that feature doesn't create a vulnerability for them. So there's a fixed cost that's fairly high. There's a huge bug bounty on Bitcoin and on Ripple, right? Billions of dollars if you could steal money on the system. So the cost to implement a feature is high. So if there's a feature that somebody really wants, that would be really useful for them, that's probably not enough to get that feature onto the system. So you're going to have a diversified system of multiple blockchains and multiple ledger systems of all kinds competing with each other for share. That's why I think InterLedger is important, because InterLedger will permit people who use different blockchains and different systems, for good reasons, to be able to make payments to each other quickly, seamlessly, and without the risk associated with little pays problem.
26:53 PATRICK GRIFFIN: Hmm. Maybe just a last question before we turn it over to the audience, and you've mentioned InterLedger. Stefan is the creator of InterLedger, or the chief architect of it. When you walk around the conference today, you'll see a lot of companies that have a blockchain offering. So, sort of going back to 2014, if you remember, the terminology and the marketing was all about: it's not about Bitcoin, it's about the blockchain. And so now we have some sound perspective on that. What's your take on the fundamental premise of a decentralized distributed database without a digital asset, and what are the trade-offs in terms of functionality versus utility? What's your opinion given the architecture IOP?
27:42 STEFAN THOMAS: Well that's a question I could easily spend hours on, so let me try to summarize. So as you mentioned, my colleague Evan Schwartz and I, we we came up with this protocol InterLedger and that came out of actually in a couple of different work streams but one in particular I remember was I was trying to figure out how to make Ripple more scalable and I was thinking about a particular kind of scalability which is similar to what David just mentioned, which was scalability in terms of functionality not just in terms of how many transactions can you do per second. Like how do I serve very different use cases that have you know mutually conflicting trade-offs. So as I was thinking about that problem I was kind of saying well maybe you don't even have to keep that one set of global state. Maybe you can have state in different places and a lot of that is honestly just rediscovering database knowledge that we've had since the 70s. Now just looking at Jim Gray's papers and just oh yeah that works for blockchains too
28:41 STEFAN THOMAS: So we took those ideas and we combined them with ideas from the internet background, in terms of networking and the concept of internetworking and so on. And so, when I look at these private blockchain type approaches, I think they are doing the first of those two steps, namely they're applying modern database thinking, or classical database thinking, to blockchain, but I don't think they're really applying the internet thinking yet, because they're attempting to achieve interoperability just by homogeneity, which does not give you that diversity of use cases. And so if you want that, you have to think about what are the simple stateless protocols that can actually tie these different systems together without dictating how they work internally. So I can have my private blockchain that has all these special features and it works in this way, and you can have your private blockchain and it works in the other way, but we can still talk through a neutral protocol. And the way that we're thinking about InterLedger, we're not married to InterLedger being the thing; like, I'm completely happy if it's Lightning or if it's something else, but I think as an industry we need to agree on some kind of standard at that layer.
29:51 STEFAN THOMAS: I think one of the reasons that we can is because, unlike a blockchain, a standard is neutral. You know, there's no asset anyone's getting rich off of. There's a lot less to agree on. The list of decisions you have to make is a lot shorter. My colleague Evan, he makes a very good point that with InterLedger there are only like seven or eight major decisions that you have to make in the architecture to really arrive at it, and we have really good reasons for each one of them. And so we think that there will be a certain convergence on one standard protocol for, again, not just blockchains, but any kind of ledger.
30:26 DAVID SCHWARTZ: I just want to add that InterLedger is completely neutral to how the ledger works internally. Any ledger that can support a very short list of very simple operations can participate. Every banking ledger can perform those operations. Almost anything that tracks ownership of value of any kind is capable of confirming that value exists, putting that value on hold, and transferring that value between two people, and those are the only primitives that InterLedger builds on. It's just the clever combination of those operations in a way that provides assurance that all of the stakeholders get out of the transaction the thing that they're supposed to get out, and get back whatever they were going to put in if they don't get out what they're supposed to get out. It's astonishingly simple at the protocol level.
31:08 PATRICK GRIFFIN: Okay, with that I will turn it over to the room for questions and some Q&A. Any questions in the back?
QUESTION: Yeah, I’m kind of new to this and I just have some really basic questions. I read something recently where Ripple was now the second most funded, or invested; Bitcoin was first, and Ethereum was third. Can you tell me how you got to that position? You seem like you’re poking at Bitcoin and how Ripple probably is more efficient and better. Then I had a second question - Where do I get a Ripple T-Shirt?
32:06 PATRICK GRIFFIN: The first question is how did we get to this position we're in - does that generally capture the essence of that question? And then Ripple t-shirts, I'm not sure about that (Come work for us!). I will attempt to answer the first question, and if you guys want to, jump in. I think that is a function of one thing Silicon Valley companies do very well: they pick a lane and they go deep on it. For us, what we've been very, very focused on is the use case. As a company, we picked a long time ago to go deep on cross-border payments, and in particular wholesale cross-border payments - that's financial institution to financial institution. It's at the enterprise level, and so when we look at digital assets today we think that there is a very clear use case around the consolidation of capital to fund payments overseas, which is exactly what we just demonstrated. Being able to transfer an asset from a server in one country to a server in another country basically allows payments companies to operate with much less capital deployed overseas. It's a quantifiable use case. Today there's 27 and a half trillion dollars in float in the banking system just sitting idly, waiting for payments to arrive. That's compounded when you look at corporates and you look at payment service companies. So there's a very, very big number, and I think that the recent traction that we've gotten has been an acknowledgement of the use case, how it fits into our overall product offering, and some of the technical benefits of XRP itself. And then when you look around, I think you're hard-pressed to find another digital asset with as clearly articulated a use case where the time horizon is now. I think there's lots of really exciting things going on in IoT and device-to-device payments and sort of the future - some of the things that Ethereum people talk about, for example - but it still feels like it's at the horizon, and I think this is being deployed today. There is a path to commercial production, and ultimately I think that's part of the reason why we're getting some traction.
34:18 DAVID SCHWARTZ: I think we also sort of crossed an important threshold. If an asset doesn't have value and it doesn't have liquidity, you can't really use it even if it has the properties that are perfect for your use case, simply because you can't get enough of it without moving the market, and I think we crossed a threshold (not the end) -
Use the link above to view the entire transcript.
submitted by ripcurldog to Ripple

Electroneum Blockchain Update May 30th & Divergence from Monero

This is a big announcement for Electroneum. See the bottom of this article for the full details of the blockchain tech update. Some of our community have been worried about our blockchain tech, especially regarding ASIC miners and blockchain flooding. We will cover these below along with the announcement of some exciting major changes to the way our blockchain runs.

ETN Blockchain Update May 30th

Monero (whose blockchain code we based ours on) performs an update approximately every 6 months, and this is a great practice, as it allows them to keep their technology moving forward and introduce new features. We will be following this model, and our first major update (also known as a fork) is scheduled to take place at block 307500, which is approximately 10.30am BST on May 30th.

Don’t panic! – Forking explained

It’s important for everyone to understand that whilst this is known as a fork, it is very different from Bitcoin forking into Bitcoin Cash or Bitcoin Gold. The fork will not result in two currencies, as all the exchanges and pools will update their software in advance of the update block and Electroneum will continue with an updated blockchain.

Time to test and implement

The end of May gives our community plenty of time to test and comment on the code changes that we will post on GitHub by Friday 5th of May. It also gives plenty of time for every node owner to update their Electroneum nodes, ready for the update block.

Electroneum divergence from Monero

We’ve always planned to move the Electroneum blockchain further towards reaching our goals, which in turn will move us away from Monero’s goals. We chose Monero because they’d written an awesome dynamic blocksize algorithm, but they also have some features that are not critical to Electroneum’s future. In this Electroneum update we’ve started to diverge away from some core Monero functionality. As we move towards a lean, fast blockchain to handle vast numbers of micropayments, we need to lose some of the overhead that comes with the privacy of Monero. We are still going to be far more private than Bitcoin or Ethereum (for instance you won’t be able to look at someone else’s wallet balance), but by decreasing some of the privacy features we can fit significantly more transactions into a block, which is critical for our next stage of growth as our corporate partners start to bring on user numbers, and our vendor program starts delivering instant cryptocurrency acceptance into online and physical shops and stores. In short, Monero is the best privacy coin in the world, whereas we need to be the best micropayment system in the world.

ETN Blockchain Tech Update (Details)

Anti-ASIC. An ASIC is a computer chip that has been made for a specific task. In this instance the task is to mine the CryptoNight algorithm that Electroneum uses. We are implementing anti-ASIC code to ensure we have maximum resistance to any network attack that could occur in the future. Limiting mining to GPUs reduces the chances of a single entity possessing enough hashing power to attempt a 51% network attack. It’s important to note that there is no proof of a 51% network attack having taken place on the Electroneum blockchain.

Transfer Fee Increase. There have been a lot of comments about our transfer fee being too low. It is important to our project that the fee remains low, because we are going to be focusing on instant payments and instant micropayments in the real world, and we need fees that are lower than typical debit/credit card fees.
However, we have suffered from blockchain flooding, so we are taking steps to ensure we are resistant to this in the future. We have therefore decided to increase our base fee to 0.1 ETN. This is still a fraction of the cost of transfer of other cryptocurrencies, but it increases the difficulty of flooding by an order of magnitude. Combined with our other updates (below), this will give us more effective resistance to blockchain flooding.

Increase block size before penalty. We have been enormously successful and seen some periods with huge amounts of legitimate blockchain transaction traffic. This, combined with blockchain flooding, has meant periods of blockchain delays. By increasing the block size before penalty, miners will be able to scale the blocks faster and get more transactions into a block. This will handle regular transactions and flood transactions, making delays less likely. Combined with the fee increase, this is a significant resistance enhancement against flooding.

Disabling of RingCT & Mixin. RingCT was introduced by Monero, whose main focus is privacy. Our main focus is bulk transactions for a mass audience, and thus we are disabling some of the privacy features of the blockchain. Disabling some privacy features means we can fit significantly more transactions into each block than with them enabled. This means less wait to get a transaction into a block and a leaner blockchain size. Wallets are still private, as we will continue to use a new stealth wallet address for every blockchain transaction, so there is still significantly more privacy than with Bitcoin or Ethereum, but considerably less privacy than with a privacy-focused coin like Monero.

Mempool life to 3 days. During high transactional volumes it is feasible that a transaction can remain in the mempool for 24 hours and reach the current limit. This would mean the transaction is returned to the sender, but that could take up to 7 days. By increasing the mempool life to 3 days (and in conjunction with some of the additional changes), we are ensuring a significant reduction in the possibility of these returned transactions.

2 minute blocks. Blocks are currently mined every minute. We are moving to two-minute blocks, which will significantly decrease the chance of an orphan block being created. Orphan blocks might contain transactions which will eventually (7 days) be returned or added to another block. Increasing the block time to 2 minutes has ramifications on the block reward, which will be modified (see below).

Block Reward. We are doubling the block reward to ensure the daily ETN block reward remains the same, despite the fact that we are releasing blocks at half the current speed. This means there will be no discernible effect on miners or pools.

Reduce difficulty window. The block difficulty window is being reduced. The block difficulty is calculated by looking at the last X blocks. It has come to our attention that hitting the ETN blockchain with large amounts of rented hashing power gives the miner an advantage over a short period of time (until the difficulty algorithm catches up with the new hashing rate). We are reducing the difficulty window to reduce the benefit these periodic miners have and to discourage this practice, making the mining process fairer. This should have little or no effect on the difficulty number itself except during the exceptional circumstances described.

Thanks for taking the time to read this update! If you are running an Electroneum node, remember to update before May 30th.
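As a rough sanity check on the two-minute-block and block-reward changes above (a sketch only; the per-block reward used below is a placeholder figure, not the actual ETN emission schedule), halving the block frequency while doubling the reward leaves the daily emission unchanged:

```python
# Sketch: daily emission before and after the update.
# REWARD is a placeholder value, not the real ETN block reward.
MINUTES_PER_DAY = 24 * 60

def daily_emission(block_time_minutes, block_reward):
    blocks_per_day = MINUTES_PER_DAY / block_time_minutes
    return blocks_per_day * block_reward

REWARD = 100.0  # placeholder ETN per block

before = daily_emission(block_time_minutes=1, block_reward=REWARD)      # 1-minute blocks
after = daily_emission(block_time_minutes=2, block_reward=2 * REWARD)   # 2-minute blocks, doubled reward

assert before == after  # same daily emission either way with this placeholder reward
print(before, after)
```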
If you are using a pool, ensure you let their telegram or other social channel know that this update is critical and must be applied before May 30th, in advance of block 307500. Have a great day everyone, Richard
submitted by Crypto_Manila to CryptoCurrency

The PHANTOM Technical Whitepaper

1. PHANTOM BLOCKCHAIN

Cryptocurrencies and smart contract platforms are becoming a shared computational resource. One could view these platforms as a new generation of computers that synchronize over thousands of individual computers. However, existing cryptocurrencies and smart contract platforms have widely recognized limitations in scaling. Average transaction rates in Bitcoin, Ethereum, and related cryptocurrencies have been limited to below 10 (usually about 3-7) transactions per second (Tx/s). As the number of applications utilizing public cryptocurrencies and smart contract platforms grows, the demand for processing high transaction rates in the order of hundreds of Tx/s is increasing. A global payment network would likely require tens of thousands of Tx/s in capacity. Can we build a decentralized and open blockchain platform capable of processing at that scale?

The limitations in scaling up existing protocols are somewhat fundamental — they are rooted in the design of the consensus and network protocols. Therefore, even though re-engineering the parameters of the existing protocols in, say, Bitcoin or Ethereum (e.g., the block size or the block rate) may show some speedup, supporting applications that need processing of thousands of Tx/s requires rethinking the underlying protocols from scratch.

We present PHANTOM — a new blockchain platform that is designed to scale in transaction rates. As the number of miners in PHANTOM increases, its transaction rates are expected to increase as well. Specifically, PHANTOM's design allows its transaction rates to roughly double with every few hundred nodes added to its network. As of this writing, the Ethereum mining network is over 30,000 nodes. PHANTOM is a redesign from scratch and has been under research and development for over 2 years.

The cornerstone in PHANTOM's design is the idea of sharding — dividing the mining network into smaller consensus groups called shards, each capable of processing transactions in parallel. If the mining network of PHANTOM is, say, 8000 miners, PHANTOM automatically creates 10 sub-networks, each of size 800 miners, in a decentralized manner without a trusted co-ordinator. Now, if one sub-network can agree on a set of (say) 100 transactions in one time epoch, then 10 sub-networks can agree on a total of 1000 transactions in aggregate. The key to aggregating securely is to ensure that sub-networks process different transactions (with no overlaps) without double-spending.

The assumptions are similar to existing blockchain-based solutions. We assume that the mining network will have a fraction of malicious nodes/identities with a total computational power that is a fraction (< 1/4) of the complete network. It is based on a standard proof-of-work scheme; however, it has a new two-layer blockchain structure. It features a highly optimized consensus algorithm for processing shards. PHANTOM further comes with an innovative special-purpose smart contract language and execution environment that leverage the underlying architecture to provide a large-scale and highly efficient computation platform. The smart contract language in PHANTOM follows a dataflow programming style, where the smart contract can be represented as a directed graph. Nodes in the graph are operations or functions, while an arc between two nodes represents the output of the first and the input to the second.
A node gets activated (or operational) as soon as all of its inputs become valid, and thus a dataflow contract is inherently parallel and suitable for decentralized systems such as PHANTOM. The sharded architecture is ideal for running large-scale computations that can be easily parallelized. Examples range from simple computations such as search, sort and linear algebra computations, to more complex computations such as training a neural net, data mining, financial modeling, scientific computing and in general any MapReduce task, among others.

This document outlines the technical design of the PHANTOM blockchain protocol. PHANTOM has a new, conceptually clean and modular design. It has six layers: the cryptographic layer (Section III), the data layer (Section IV), the network layer (Section V), the consensus layer (Section VI), the smart contract layer (Section VII) and the incentive layer (Section VIII). Before we present the different layers, we first discuss the system settings, underlying assumptions and threat model in Section II.

2. SYSTEM SETTING AND ASSUMPTIONS

Entities in PHANTOM. There are two main entities in PHANTOM: users and miners. A user is an external entity who uses PHANTOM's infrastructure to transfer funds or run smart contracts. Miners are the nodes in the network who run PHANTOM's consensus protocol and get rewarded for their service. In the rest of this whitepaper, we interchangeably use the terms miner and node. PHANTOM's mining network is further divided into several smaller networks, each referred to as a shard. A miner is assigned to a shard by a set of miners called DS nodes. This set of DS nodes is also referred to as the DS committee. Each shard and the DS committee has a leader. The leaders play an important role in PHANTOM's consensus protocol and for the overall functioning of the network. Each user has a public, private key pair for a digital signature scheme and each miner in the network has an associated IP address and a public key that serves as an identity.

Intrinsic token. PHANTOM has an intrinsic token called Zillings, or ZILs for short. Zillings give platform usage rights to the users in terms of using it to pay for transaction processing or run smart contracts. Throughout this whitepaper, any reference to amount, value, balance or payment should be assumed to be counted in ZIL.

Adversarial model. We assume that the mining network at any point of time has a fraction of byzantine nodes/identities with a total computational power that is at most a fraction f of the complete network, where 0 ≤ f < 1/4 and n is the total size of the network. The factor 1/4 is an arbitrary constant bounded away from 1/3, selected as such to yield reasonable constant parameters. We further assume that honest nodes are reliable during protocol runs, and failed or disconnected nodes are counted in the byzantine fraction. Byzantine nodes can deviate from the protocol, drop or modify messages and send different messages to honest nodes. Further, all byzantine nodes can collude together. We assume that the total computation power of the byzantine adversaries is still confined to standard cryptographic assumptions of probabilistic polynomial-time adversaries. We however assume that messages from honest nodes (in the absence of network partition) can be delivered to honest destinations after a certain bound δ, but δ may be time-varying. The bound δ is used to ensure liveness but not safety.
In case such timing and connectivity assumptions are not met, it becomes possible for byzantine nodes to delay the messages significantly (simulating a gain in computation power) or, worse, “eclipse” the network. In the event of network partition, as dictated by the CAP theorem, one can only choose between consistency and availability. In PHANTOM, we choose to be consistent and sacrifice availability.

3. SMART CONTRACT LAYER

PHANTOM comes with an innovative special-purpose smart contract language and execution environment that leverages the underlying architecture to provide a large-scale and highly efficient computation platform. In this section, we present the smart contract layer, which employs a dataflow programming architecture.

A. Computational Sharding using Dataflow Paradigm

PHANTOM's smart contract language and its execution platform are designed to leverage the underlying network and transaction sharding architecture. The sharded architecture is ideal for running computation-intensive tasks in an efficient manner. The key idea is the following: only a subset of the network (such as a shard) would perform the computation. We refer to this approach as computational sharding. In contrast with existing smart contract architectures (such as Ethereum), computational sharding in PHANTOM takes a very different approach towards how to process contracts. In Ethereum, every full node is required to perform the same computation to validate the outcome of the computation and update the global state. Albeit secure, such a fully redundant programming model is prohibitively expensive for running large-scale computations that can be easily parallelized. Examples range from simple computations such as search, sort and linear algebra computations, to more complex computations such as training a neural net, data mining, financial modeling, etc.

PHANTOM's computational sharding approach relies on a new smart contract language that is not Turing-complete but scales much better for a multitude of applications. The smart contract language in PHANTOM follows a dataflow programming style. In the dataflow execution model, a contract is represented by a directed graph. Nodes in the graph are primitive instructions or operations. Directed arcs between two nodes represent the data dependencies between the operations, i.e., the output of the first and the input to the second. A node gets activated (or operational) as soon as all of its inputs are available. This stands in contrast to the classical von Neumann execution model (as employed in Ethereum), in which an instruction is only executed when the program counter reaches it, regardless of whether or not it could be executed earlier.

The key advantage of employing a dataflow approach is that more than one instruction can be executed at once. Thus, if several nodes in the graph become activated at the same time, they can be executed in parallel. This simple principle provides the potential for massive parallel execution. To see this, we present a simple sequential program in Figure 1a with three instructions and, in Figure 1b, its dataflow variant. Under the von Neumann execution model, the program would run in three time units: first computing A, then B and finally C. The model does not capture the fact that A and B can be independently computed. The dataflow program, on the other hand, can compute these two values in parallel. The node that performs addition gets activated as soon as A and B are available.
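The Figure 1 example can be sketched as a tiny dataflow-style program (an illustration of the execution model only, not PHANTOM's actual contract language): each node fires as soon as all of its inputs are available, so the nodes producing A and B can run concurrently before the addition node activates.

```python
# Sketch of the dataflow execution model from Figure 1.
# Nodes fire when all of their inputs are ready; independent nodes can run in parallel.
from concurrent.futures import ThreadPoolExecutor

def compute_a():
    return 2 * 3          # instruction producing A

def compute_b():
    return 4 + 5          # instruction producing B

def add(a, b):
    return a + b          # node activated once both A and B are available

with ThreadPoolExecutor() as pool:
    fut_a = pool.submit(compute_a)   # A and B have no mutual dependency,
    fut_b = pool.submit(compute_b)   # so they are submitted concurrently
    c = add(fut_a.result(), fut_b.result())  # C waits on both inputs

print(c)  # 15
```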
When run on PHANTOM's sharded network, each node in the dataflow program can eventually be attributed to a single shard or even a small subset of nodes within a shard. Hence the architecture is ideal for any MapReduce-style computational task, where some nodes perform the mapping task while another node can work as a reducer to aggregate the work done by each mapper. In order to facilitate the execution of a dataflow program, PHANTOM's smart contract language has the following features: 1) Operating over a virtual memory space shared globally across the entire blockchain. 2) Locking of intermediate cells in the virtual shared memory space during execution. 3) Checkpointing intermediate results during execution, committed to the blockchain.

B. Smart Security Budgeting

Apart from the benefits of parallelization offered by the dataflow computation model, PHANTOM further provides a flexible security budgeting mechanism for computational sharding. This feature is enabled by sharding the computational resources in the blockchain network via an overlay above the consensus process. Computational sharding allows users of PHANTOM and applications running on PHANTOM to specify the sizes of the consensus groups that compute each of the subtasks. Each consensus group will then be tasked to compute the same subtask and produce the results. The user specifies the condition on acceptance of the results, e.g., all in the consensus group must produce the same results, or 3/4 of them must produce the same results, etc. A user of an application running on PHANTOM can budget how much she wants to spend on computing and security, respectively. In particular, a user running a particular deep learning application may spend more gas fee on running more different neural network tasks than on letting too many nodes repeat the same computation. In this case, she can specify a smaller consensus group for running each neural network computation. On the other hand, a sophisticated financial modeling algorithm that requires greater precision may task consensus groups with a larger number of nodes to compute the critical portions of the algorithm, to be more resilient against potential tampering and manipulation of the results.

C. Scalable Applications: Examples

PHANTOM aims to provide a platform to run highly scalable computations in a multitude of fields such as data mining, machine learning and financial modeling, to name a few. Since supporting efficient sharding of Turing-complete programs is very challenging, and there exist public blockchains that support Turing-complete smart contracts (e.g., Ethereum), PHANTOM focuses on specific applications with requirements not met today.

1) Computation with parallelizable computation load: Scientific computing over large data is a typical example where one requires a large amount of distributed computing power. Moreover, most of these computations are highly parallelizable; examples include linear algebra operations on large matrices, search in the sea of huge amounts of data, and simulation on a large dataset, among others. PHANTOM provides such computing tasks an inexpensive and short-turnaround alternative. Moreover, with the right incentive in place with computational sharding and security budgeting, PHANTOM can be leveraged as a readily available and highly reliable resource for such heavy computation load.
2)Train neural nets: With the ever growing popularity and use cases of machine learning (in particular deep learning), it is imperative to have an infrastructure that allows deep learning models to train on large datasets. It is well known that training on large datasets is crucial to a model’s accuracy. To this end, PHANTOM’s computational sharding and dataflow language will be particularly useful to build machine learning applications. It will serve as an infrastructure that may run tools like TensorFlow[] by tasking groups of PHANTOM nodes to independently perform different computations such as computing gradients, apply activation function, compute training loss, etc. 3)Application with high complexity and high precision algorithms: Different from the applications mentioned above, some applications, such as computations over financial models, may require high precision. Any minor deviation in one part of the computation may incur heavy losses in investments. Such applications can task consensus groups of larger number of nodes in PHANTOM to allow them to cross-check the computational results of each other. The key challenge in offloading the computational tasks of such financial modeling algorithms to a public platform, such as PHANTOM, is the concern for data privacy and intellectual property of the algorithms. To begin with, we envision certain well known portion of such computation can be placed to PHANTOM for efficient and secure computation first, while the future research and development of PHANTOM will further strengthen the protection of data privacy and intellectual property for such applications. 4.Weights and more In this section we define the weight of a transaction, and related concepts. The weight of a transaction is proportional to the amount of work that the issuing node invested into it. In the current implementation of PHANTOM, the weight may only assume values 3n, where n is a positive integer that belongs to some nonempty interval of acceptable values[]. In fact, it is irrelevant to know how the weight was obtained in practice. It is only important that every transaction has a positive integer, its weight, attached to it. In general, the idea is that a transaction with a larger weight is more “important” than a transaction with a smaller weight. To avoid spamming and other attack styles, it is assumed that no entity can generate an abundance of transactions with “acceptable” weights in a short period of time. 4)One of the notions we need is the cumulative weight of a transaction: it is defined as the own weight of a particular transaction plus the sum of own weights of all transactions that directly or indirectly approve this transaction. The algorithm for cumulative weight calculation is illustrated in Figure 1. The boxes represent transactions, the small number in the SE corner of each box denotes own weight, and the bold number denotes the cumulative weight. For example, transaction F is directly or indirectly approved by transactions A,B,C,E. The cumulative weight of F is 9 = 3 + 1 + 3 + 1 + 1, which is the sum of the own weight of F and the own weights of A,B,C,E. Let us define “tips” as unapproved transactions in the tangle graph. In the top tangle snapshot of Figure 1, the only tips are A and C. When the new transaction X arrives and approves A and C in the bottom tangle snapshot, X becomes the only tip. The cumulative weight of all other transactions increases by 3, the own weight of X. 
We need to introduce two additional variables for the discussion of approval algorithms. First, for a transaction site on the tangle, we introduce its height: the length of the longest oriented path to the genesis; depth: the length of the longest reverse-oriented path to some tip. For example, G has height 1 and depth 4 in Figure 2 because of the reverse path F,D,B,A, while D has height 3 and depth 2. Also, let us introduce the notion of the score. By definition, the score of a transaction is the sum of own weights of all transactions approved by this transaction plus the own weight of the transaction itself. In Figure 2, the only tips are A and C. Transaction A directly or indirectly approves transactions B,D,F,G, so the score of A is 1+3+1+3+1 = 9. Analogously, the score of C is 1 + 1 + 1 + 3 + 1 = 7. In order to understand the arguments presented in this paper, one may safely assume that all transactions have an own weight equal to 1. From now on, we stick to this assumption. Under this assumption, the cumulative weight of transaction X becomes 1 plus the number of transactions that directly or indirectly approve X, and the score becomes 1 plus the number of transactions that are directly or indirectly approved by X. Let us note that, among those defined in this section, the cumulative weight is (by far!) the most important metric, although height, depth, and score will briefly enter some discussions as well.
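These definitions translate directly into a short traversal over the approval graph. The sketch below is illustrative only (a made-up DAG with unit own weights, not the one in Figures 1 and 2): cumulative weight counts a transaction's own weight plus its direct and indirect approvers, while score counts its own weight plus everything it directly or indirectly approves.

```python
# Sketch: cumulative weight and score on a small approval DAG with unit own weights.
# 'approves[x]' is the set of transactions that x directly approves.
approves = {
    "genesis": set(),
    "F": {"genesis"},
    "D": {"F"},
    "B": {"D"},
    "E": {"D"},
    "A": {"B"},
    "C": {"E"},
}

def approved_by(tx):
    """All transactions reachable from tx by following approvals (directly or indirectly)."""
    seen, stack = set(), list(approves[tx])
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(approves[t])
    return seen

def score(tx):
    return 1 + len(approved_by(tx))     # own weight plus everything tx approves

def cumulative_weight(tx):
    approvers = [t for t in approves if tx in approved_by(t)]
    return 1 + len(approvers)           # own weight plus everything that approves tx

print(score("A"), cumulative_weight("F"))  # 5 6 for this toy graph
```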
Figure 1: DAG with weight assignments before and after a newly issued transaction, X. The boxes represent transactions, the small number in the SE corner of each box denotes own weight, and the bold number denotes the cumulative weight.
Figure 2: DAG with own weights assigned to each site, and scores calculated for sites A and C.

5. Splitting attack

Aviv Zohar suggested the following attack scheme against the proposed MCMC algorithm. In the high-load regime, an attacker can try to split the tangle into two branches and maintain the balance between them. This would allow both branches to continue to grow. The attacker must place at least two conflicting transactions at the beginning of the split to prevent an honest node from effectively joining the branches by referencing them both simultaneously. Then, the attacker hopes that roughly half of the network would contribute to each branch so that they would be able to “compensate” for random fluctuations, even with a relatively small amount of personal computing power. If this technique works, the attacker would be able to spend the same funds on the two branches.

To defend against such an attack, one needs to use a “sharp-threshold” rule that makes it too hard to maintain the balance between the two branches. An example of such a rule is selecting the longest chain on the Bitcoin network. Let us translate this concept to the tangle when it is undergoing a splitting attack. Assume that the first branch has total weight 537, and the second branch has total weight 528. If an honest node selects the first branch with probability very close to 1/2, then the attacker would probably be able to maintain the balance between the branches. However, if an honest node selects the first branch with probability much larger than 1/2, then the attacker would probably be unable to maintain the balance. The inability to maintain balance between the two branches in the latter case is due to the fact that after an inevitable random fluctuation, the network will quickly choose one of the branches and abandon the other. In order to make the MCMC algorithm behave this way, one has to choose a very rapidly decaying function f, and initiate the random walk at a node with large depth so that it is highly probable that the walk starts before the branch bifurcation. In this case, the random walk would choose the “heavier” branch with high probability, even if the difference in cumulative weight between the competing branches is small.

It is worth noting that the attacker's task is very difficult because of network synchronization issues: they may not be aware of a large number of recently issued transactions. Another effective method for defending against a splitting attack would be for a sufficiently powerful entity to instantaneously publish a large number of transactions on one branch, thus rapidly changing the power balance and making it difficult for the attacker to deal with this change. If the attacker manages to maintain the split, the most recent transactions will only have around 50% confirmation confidence (Section 1), and the branches will not grow. In this scenario, the “honest” nodes may decide to start selectively giving their approval to the transactions that occurred before the bifurcation, bypassing the opportunity to approve the conflicting transactions on the split branches.

One may consider other versions of the tip selection algorithm. For example, if a node sees two big subtangles, then it chooses the one with a larger sum of own weights before performing the MCMC tip selection algorithm outlined above. The following idea may be worth considering for future implementations.
One could make the transition probabilities defined in (13) depend on both Hx − Hy and Hx in such a way that the next step of the Markov chain is almost deterministic when the walker is deep in the tangle, yet becomes more random when the walker is close to tips. This will help avoid entering the weaker branch while assuring sufficient randomness when choosing the two tips to approve.

Conclusions: We considered attack strategies for when an attacker tries to double-spend by “outpacing” the system. The “large weight” attack means that, in order to double-spend, the attacker tries to give a very large weight to the double-spending transaction so that it would outweigh the legitimate subtangle. This strategy would be a menace to the network in the case where the allowed own weight is unbounded. As a solution, we may limit the own weight of a transaction from above, or set it to a constant value. In the situation where the maximal own weight of a transaction is m, the best attack strategy is to generate transactions with own weight m that reference the double-spending transaction. When the input flow of “honest” transactions is large enough compared to the attacker's computational power, the probability that the double-spending transaction has a larger cumulative weight can be estimated using the formula. The attack method of building a “parasite chain” makes approval strategies based on height or score obsolete, since the attacker's sites will have higher values for these metrics when compared to the legitimate tangle. On the other hand, the MCMC tip selection algorithm described in Section 4.1 seems to provide protection against this kind of attack. The MCMC tip selection algorithm also offers protection against lazy nodes as a bonus.

6. Resistance to quantum computations

It is known that a sufficiently large quantum computer could be very efficient for handling problems that rely on trial and error to find a solution. The process of finding a nonce in order to generate a Bitcoin block is a good example of such a problem. As of today, one must check an average of 2^68 nonces to find a suitable hash that allows a new block to be generated. It is known that a quantum computer
would need Θ(√N) operations to solve a problem that is analogous to the Bitcoin puzzle stated above. This same problem would need Θ(N) operations on a classical computer. Therefore, a quantum computer would be around √(2^68) = 2^34 ≈ 17 billion times more efficient at mining the Bitcoin blockchain than a classical computer. Also, it is worth noting that if a blockchain does not increase its difficulty in response to increased hashing power, there would be an increased rate of orphaned blocks. For the same reason, a “large weight” attack would also be much more efficient on a quantum computer. However, capping the weight from above, as suggested in Section 4, would effectively prevent a quantum computer attack as well. This is evident in iota because the number of nonces that one needs to check in order to find a suitable hash for issuing a transaction is not unreasonably large.
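To check that figure (an illustrative calculation only), a Grover-style search over the 2^68 candidate nonces mentioned above needs on the order of 2^34 queries:

```python
# Sketch: classical vs. Grover-style search cost for the 2^68 nonce example above.
import math

N = 2 ** 68                     # average number of nonces checked classically
classical_ops = N               # Θ(N) trial-and-error
quantum_ops = math.isqrt(N)     # Θ(√N) with Grover-style search

print(quantum_ops)                   # 2**34 = 17179869184
print(classical_ops // quantum_ops)  # speedup factor ≈ 17.2 billion
```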
submitted by Phtchain to u/Phtchain

A few thoughts - Wednesday, August 20, 2014

Good evening! A few thoughts for dinner tonight:

The VC bubble and a domino effect of failures

The latest fad in /bitcoin is that the price of bitcoin doesn't matter. One thread was titled "Sean's Outpost can give meals whether bitcoin is worth $35 or $1200!" That's true, but bitcoin is more complicated than that one use case.
I talked about the insane amounts of venture capital being poured into the industry right now a few weeks ago. The price of bitcoin is important for these people because it affects all sorts of businesses. For example, mining businesses can't profit if there is a crash after they design their chips. Exchanges make less money if the price is lower because a lower price can't support high volume. People who build projects on top of the network and who are sitting on donations can fold. Altcoins that are promising can be forked because it is no longer profitable to mine them.
Even at higher prices, there was not enough money to go around to prevent most of the VCs from losing on their investments. The piece of the pie that the VCs can earn shrinks as the price of a bitcoin falls. These people have deep pockets, so most of them can survive a brief downturn, but as the network expands, the lower bound price that would precipitate a complete collapse is rising.
When bitcoins averaged around $50, there weren't lots of employees getting paid in bitcoins and bankers wanting returns on investment. Now that bitcoins tend to average around $600, the size of the economy has increased significantly. Previously, the price of bitcoins could have dropped to $10 and everyone could still have looked forward to a recovery. Now, the minimum price is much higher (I've said $200) where a cascading chain reaction of business failures would take out the whole industry. In the future, that minimum price will likely rise to $1000, and then to $10,000, and so on.
The price falling below $200 or whatever the minimum is doesn't itself signal anything wrong with the promise of bitcoins. Instead, the low price will cause the failure of some critical part of the infrastructure like a major exchange, which could then cause businesses that depend on the exchange to be taken out, and so on. Even if everyone who owns bitcoins believed that the technology would succeed and there were many people spending bitcoins, the businesses would all be forced into bankruptcy simply because other businesses they need to offer their services failed in a chain reaction.
This VC bubble is dangerous and the best thing that could happen right now is for the VCs to stop temporarily with their investments so that this does not happen. Otherwise, the industry will end up in a fragile state where there are startups depending on other startups that have business models depending on a base price.

The "technology adoption curve"

One of the popular graphs making the rounds nowadays is the "technology adoption lifecycle" graph at http://setandbma.files.wordpress.com/2012/05/technology-adoption.png. According to the people who agree with the theory, bitcoin is currently in the bottom of that huge valley in the middle of the chart. One writer suggested that bitcoin was somewhere near the top of the curve still, with a long ways to fall down the valley before the technology ends up as a fraction of what it was thought to become.
While this chart may be relevant to other technologies, it isn't relevant to bitcoins. The most obvious problem with trying to explain bitcoin adoption using this chart is that there have been many bubbles, but the chart only contains one bubble. It would have been possible to pull out this "technology lifecycle" chart after any bubble in the past few years and state that bitcoins were stuck in the "chasm" and will be permanently damaged. For example, someone could have drawn this graph after the period where bitcoins fell from $50 to $2 and stated that the use of the technology will never be valued at more than $20.
Another reason to ignore this chart is that many of the other technologies that are often compared against it didn't follow the chart either. Some people suggest that the Internet followed this chart, because there was a bubble in 2000 that later crashed, and that the high hopes of the Internet transforming daily life never came to be. The way I see it, the Internet has grown far beyond anything imagined in 2000. Whereas pets.com might have failed, Wal-Mart is now losing customers because Internet shopping has become so cheap that it is a bad idea to go to their stores anymore if your goal is to save money. Cell phones have made many people oblivious to the world; I recently compared what it was like to ride the bus when I went to college and what it is now like to ride the same bus system. Now, everyone on the bus is engrossed in their phones and nobody even bothers to look out the windows, and in ten years people will probably be playing video games with their friends in their glasses, oblivious to the world around them.
Bitcoins are also not "just another technology." There are some technologies like bitcoin that can completely change the world. The Internet, television, and radio were a few of them, because the way the world worked was fundamentally changed by these technologies. Most of the other technologies listed on these charts, like facebook, self-driving cars, and virtual reality are not things that fundamentally change the way the world works. The economy ticks on without being changed significantly by facebook, but any company that has no Internet connection is obsolete.

Unbelievable deals on PS4s

Newegg has unbelievable deals on Playstation 4s right now. If you pay with bitcoins, you can buy one for $319.99, almost 30% below market rate. These are brand new and sealed, and it's unlikely that Sony will lower prices to this level for at least a year to a year and a half. If you want to make quick money, you can buy a PS4 and then undercut sellers on Craigslist to pocket 50 bucks. For some reason, the market for these bitcoin-discounted PS4s is decoupled from the dollar-denominated market, and there is significant profit to be made.
I was trying to figure out what the catch is with these consoles yesterday. Newegg must be losing money on these, because they have a limit of 1 per customer. If they were earning money, there would be no reason to have such a limit. If they were simply offering a loss leader to get people to shop at Newegg or to upsell accessories, then they would offer the same price or close to it in dollars.
Therefore, there are three possibilities for this pricing. The first is that Newegg is being killed by so many transaction fees that they can actually afford to discount the PS4s to this insane price rather than give most of their profits to banks. The second is that bitcoins are priced low, and the company is gambling on a 30% loss now to gain more when this panic subsides. The final reason is that they simply want to promote bitcoins as a currency because in the long-run, the money they can save from transaction fees if bitcoin were the world currency far outweighs the losses they are taking on these items now.
I will take a risky position and say that #2 has played a role in this sale. If a company were looking to promote bitcoin adoption, and hold some percentage of assets in bitcoins, then the best time to do it would be when prices are very low. They can sell thousands of these consoles and take $30k in losses, but they end up with 1000 bitcoins in return. In the process, there are a certain number of people who bought accessories that are very profitable, there are new customers who will now return to Newegg in the future, the price of bitcoins is likely to rise eventually, and they have encouraged people who otherwise would never have bought bitcoins to do so, so that the transaction fees they pay in the future are more likely to be lower. Someone in the accounting department was tasked with adding together all these probabilities and came up with how much they can discount the units to make a profit in the long term.

How long a transaction takes

I read an article this morning where someone mentioned that the average time a person needs to wait for a transaction to process is five minutes. The number is derived from the incorrect assumption that blocks occur at regular ten-minute intervals, so that if you picked a random point within one of those intervals, you would be, on average, five minutes away from the next block.
Some people also state that it takes ten minutes to process a transaction, which, again, is inaccurate. What's alarming is that journalists for big-name newspapers spread false information to the public when they publish articles about the supposed "10-minute confirmation time."
Hashing for the bitcoin network is independent. If people have been hashing for 60 minutes without finding a block, there is no greater chance of finding a block in the next minute than there was 60 minutes ago. Therefore, one cannot say that blocks occur "ten minutes apart," because that isn't always true. Having done lots of work on a block does not mean that the next block is any more likely to be found sooner.
The time until the next block follows an exponential distribution. If the hashrate has not increased or decreased since the last difficulty change, a transaction published at a random moment has a median wait of about 6.9 minutes (10 · ln 2) until the next block, even though the mean wait is still 10 minutes because the process is memoryless. This does not make intuitive sense, but it results from the random nature of independent hashing, and it is good news for those who believe a merchant must always wait a full 10 minutes to receive a transaction.
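A small sketch of this calculation, assuming a 10-minute target block interval and a steady hashrate, that checks the median and mean waiting times numerically:

```python
import math
import random

MEAN_BLOCK_INTERVAL = 10.0  # minutes, assuming hashrate matches difficulty

# Analytic values for an exponential distribution with mean 10 minutes.
median_wait = MEAN_BLOCK_INTERVAL * math.log(2)   # ≈ 6.93 minutes
mean_wait = MEAN_BLOCK_INTERVAL                    # memoryless: still 10 minutes

# Monte Carlo check: sample waits from the moment a transaction is broadcast.
samples = [random.expovariate(1.0 / MEAN_BLOCK_INTERVAL) for _ in range(200_000)]
print(f"analytic median: {median_wait:.2f} min, analytic mean: {mean_wait:.2f} min")
print(f"simulated median: {sorted(samples)[len(samples) // 2]:.2f} min, "
      f"simulated mean: {sum(samples) / len(samples):.2f} min")
```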

Other

submitted by quintin3265 to BitcoinThoughts [link] [comments]

Videos: What is Hash Rate and Hash Power in Crypto Mining; Bitcoin Hash Calculator; Bitcoin Calculator; Is Bitcoin Money? Must See Max Keiser Video; CryptoCurrency Mining Difficulty Log Jan 21 2020 Hash Rates of Difficulty; Bitcoin mining profit calculator

An easy-to-use crypto-currency finance utility used to calculate a Dash miner's potential profits in DASH and multiple fiat currencies. The calculator fetches price and network data from the internet and only requires the hash rate (mining speed) from the user. A projected future profit chart is created dynamically and displayed instantly.

It takes, on average, difficulty × 2^32 (roughly 4.3 billion) hashes to mine a block. Look up the current difficulty value (it changes every couple of weeks), multiply by 2^32 to get hashes per block, and divide by 25 (the block reward) to get hashes per bitcoin. Divide that by how many seconds you are willing to wait for your bitcoin (I recommend one week's worth of seconds). I think the answer is about 10 TH/s.

The block reward is what miners try to earn using their ASICs, which make up the entirety of the Bitcoin network hash rate. ASICs are expensive and have high electricity costs. Miners are profitable when their hardware and electricity costs to mine one bitcoin are lower than the price of one bitcoin, which means they can mine bitcoins and sell them for a profit. The more hash power a miner contributes, the larger the share of rewards they can expect.

The time for a block, taken from the coin's specifications, is used to calculate the profit for the next 24 hours. This is an estimated profit based on the current global hash rate; if the hash rate changes, the profit will change too. Coins with volatile networks are expected to jump up and down and occasionally show at the top of the list. The "Revenue $ 24h" column shows these profits.

The CoinDesk Bitcoin Calculator converts bitcoin into any world currency using the Bitcoin Price Index, including USD, GBP, EUR, CNY, JPY, and more.
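A minimal sketch of the back-of-the-envelope estimate described above; the difficulty value is purely illustrative, and the 25 BTC block reward is the one assumed in the text:

```python
# Back-of-the-envelope estimate of the hash rate needed to mine one bitcoin
# in a chosen amount of time. The difficulty below is hypothetical; plug in
# the current network difficulty.
DIFFICULTY = 23_000_000_000        # illustrative difficulty value
BLOCK_REWARD_BTC = 25              # block reward assumed in the passage
SECONDS_PER_WEEK = 7 * 24 * 3600

hashes_per_block = DIFFICULTY * 2**32            # expected hashes to find a block
hashes_per_btc = hashes_per_block / BLOCK_REWARD_BTC
required_hashrate = hashes_per_btc / SECONDS_PER_WEEK   # hashes per second

print(f"hashes per block:   {hashes_per_block:.3e}")
print(f"hashes per bitcoin: {hashes_per_btc:.3e}")
print(f"hash rate to mine 1 BTC per week: {required_hashrate / 1e12:.2f} TH/s")
```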


What is Hash Rate and Hash Power in Crypto Mining

The hash rate refers to the rate at which your mining hardware is performing, as reported by your miner software or estimated from the mining hardware comparison tables. The Bitcoin Hash Calculator is used to calculate the profitability of Bitcoin mining and to find bitcoin miners with a good return to buy; you can easily calculate how many bitcoins you mine with your hash rate. The 11th episode of CCMDL (CryptoCurrency Mining Difficulty Log, January 21, 2020) goes over the difficulty of Ethereum and Bitcoin. Hash rate is also used as the speed indicator of a machine that mines Bitcoin on the blockchain: the higher a machine's hash rate, the faster it will work through candidate hashes and find blocks.
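For illustration, a rough sketch of how a miner's share of the network hash rate translates into expected blocks found; both hash rate figures here are hypothetical:

```python
# Rough estimate of how often a miner finds a block, given a personal hash
# rate and the total network hash rate (both values are illustrative).
MY_HASHRATE = 100e12        # 100 TH/s, hypothetical ASIC
NETWORK_HASHRATE = 150e18   # 150 EH/s, hypothetical network total
BLOCKS_PER_DAY = 144        # roughly one block every 10 minutes

share = MY_HASHRATE / NETWORK_HASHRATE
expected_blocks_per_day = share * BLOCKS_PER_DAY
print(f"share of network hash rate: {share:.2e}")
print(f"expected blocks per day:    {expected_blocks_per_day:.6f}")
print(f"expected days per block:    {1 / expected_blocks_per_day:,.0f}")
```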
