
Bob The Magic Custodian



Summary: Everyone knows that when you give your assets to someone else, they always keep them safe. If this is true for individuals, it is certainly true for businesses.
Custodians always tell the truth and manage funds properly. They won't have any interest in taking the assets as an exchange operator would. Auditors tell the truth and can't be misled. That's because organizations that are regulated are incapable of lying and don't make mistakes.

First, some background. Here is a summary of how custodians make us more secure:

Previously, we might give Alice our crypto assets to hold. There were risks:

But "no worries", Alice has a custodian named Bob. Bob is dressed in a nice suit. He knows some politicians. And he drives a Porsche. "So you have nothing to worry about!". And look at all the benefits we get:
See - all problems are solved! All we have to worry about now is:
It's pretty simple. Before, we had to trust Alice. Now we only have to trust Alice, Bob, and all the ways in which they communicate. Just think of how much more secure we are!

"On top of that", Bob assures us, "we're using a special wallet structure". Bob shows Alice a diagram. "We've broken the balance up and store it in lots of smaller wallets. That way", he assures her, "a thief can't take it all at once". And he points to a historic case where a large sum was taken "because it was stored in a single wallet... how stupid".
"Very early on, we used to have all the crypto in one wallet", he said, "and then one Christmas a hacker came and took it all. We call him the Grinch. Now we individually wrap each crypto and stick it under a binary search tree. The Grinch has never been back since."

"As well", Bob continues, "even if someone were to get in, we've got insurance. It covers all thefts and even coercion, collusion, and misplaced keys - only subject to the policy terms and conditions." And with that, he pulls out a phone-book sized contract and slams it on the desk with a thud. "Yep", he continues, "we're paying top dollar for one of the best policies in the country!"
"Can I read it?' Alice asks. "Sure," Bob says, "just as soon as our legal team is done with it. They're almost through the first chapter." He pauses, then continues. "And can you believe that sales guy Mike? He has the same year Porsche as me. I mean, what are the odds?"

"Do you use multi-sig?", Alice asks. "Absolutely!" Bob replies. "All our engineers are fully trained in multi-sig. Whenever we want to set up a new wallet, we generate 2 separate keys in an air-gapped process and store them in this proprietary system here. Look, it even requires the biometric signature from one of our team members to initiate any withdrawal." He demonstrates by pressing his thumb into the display. "We use a third-party cloud validation API to match the thumbprint and authorize each withdrawal. The keys are also backed up daily to an off-site third-party."
"Wow that's really impressive," Alice says, "but what if we need access for a withdrawal outside of office hours?" "Well that's no issue", Bob says, "just send us an email, call, or text message and we always have someone on staff to help out. Just another part of our strong commitment to all our customers!"

"What about Proof of Reserve?", Alice asks. "Of course", Bob replies, "though rather than publish any blockchain addresses or signed transaction, for privacy we just do a SHA256 refactoring of the inverse hash modulus for each UTXO nonce and combine the smart contract coefficient consensus in our hyperledger lightning node. But it's really simple to use." He pushes a button and a large green checkmark appears on a screen. "See - the algorithm ran through and reserves are proven."
"Wow", Alice says, "you really know your stuff! And that is easy to use! What about fiat balances?" "Yeah, we have an auditor too", Bob replies, "Been using him for a long time so we have quite a strong relationship going! We have special books we give him every year and he's very efficient! Checks the fiat, crypto, and everything all at once!"

"We used to have a nice offline multi-sig setup we've been using without issue for the past 5 years, but I think we'll move all our funds over to your facility," Alice says. "Awesome", Bob replies, "Thanks so much! This is perfect timing too - my Porsche got a dent on it this morning. We have the paperwork right over here." "Great!", Alice replies.
And with that, Alice gets out her pen and Bob gets the contract. "Don't worry", he says, "you can take your crypto-assets back anytime you like - just subject to our cancellation policy. Our annual management fees are also super low and we don't adjust them often".

How many holes have to exist for your funds to get stolen?
Just one.

Why are we taking a powerful offline multi-sig setup, widely used globally in hundreds of different (and often lacking) regulatory environments with 0 breaches to date, and circumventing it with a demonstrably weaker third-party layer? And paying a great expense to do so?
Go through the list of breaches at highly credible organizations in the past 2 years, the list of major corporate frauds (only the ones we know about), the list of all the times platforms have lost funds, and the list of times and ways that people have lost their crypto from identity theft, hot wallet exploits, extortion, etc. Then go through this custodian with a fine-tooth comb. Even if you truly believe they have value to add far beyond what you could do yourself, sticking your funds in a wallet (or set of wallets) they control exclusively is the absolute worst possible way to take advantage of that security.

The best way to add security for crypto-assets is to make a stronger multi-sig. With one custodian, what you are doing is giving them your cryptocurrency and hoping they're honest, competent, and flawlessly secure. It's no different than storing it on a really secure exchange. Maybe the insurance will cover you. Didn't work for Bitpay in 2015. Didn't work for Yapizon in 2017. Insurance has never paid a claim in the entire history of cryptocurrency. But maybe you'll get lucky. Maybe your exact scenario will buck the trend and be what they're willing to cover. After the large deductible and hopefully without a long and expensive court battle.

And you want to advertise this increase in risk, the lapse of judgement, an accident waiting to happen, as though it's some kind of benefit to customers ("Free institutional-grade storage for your digital assets.")? And then some people are writing to the OSC that custodians should be mandatory for all funds on every exchange platform? That this somehow will make Canadians as a whole more secure or better protected compared with standard air-gapped multi-sig? On what planet?

Most of the problems in Canada stemmed from one thing - a lack of transparency. If Canadians had known what a joke Quadriga was - it wouldn't have grown to lose $400m from hard-working Canadians from coast to coast to coast. And Gerald Cotten would be in jail, not wherever he is now (at best, rotting peacefully). EZ-BTC and mister Dave Smilie would have been a tiny little scam to his friends, not a multi-million dollar fraud. Einstein would have got their act together or been shut down BEFORE losing millions and millions more in people's funds generously donated to criminals. MapleChange wouldn't have even been a thing. And maybe we'd know a little more about CoinTradeNewNote - like how much was lost in there. Almost all of the major losses with cryptocurrency exchanges involve deception with unbacked funds.
So it's great to see transparency reports from BitBuy and ShakePay where someone independently verified the backing. The only thing we don't have is:
It's not complicated to validate cryptocurrency assets. They need to exist, they need to be spendable, and they need to cover the total balances. There are plenty of credible people and firms across the country that have the capacity to reasonably perform this validation. Having more frequent checks by different, independent parties who publish transparent reports is far more valuable than an annual check by a single "more credible/official" party who does the exact same basic checks and may or may not publish anything. Here's an example set of requirements that could be mandated:
There are ways to structure audits such that neither crypto assets nor customer information is ever put at risk, and both can still be properly validated and publicly verifiable. There are also ways to structure audits such that they are completely reasonable for small platforms and don't inhibit innovation in any way. By making the process as reasonable as possible, we can completely eliminate any reason/excuse that an honest platform would have for not being audited. That is arguably far more important than any incremental improvement we might get from mandating "the best of the best" accountants. Right now we have nothing mandated and tons of Canadians using offshore exchanges with no oversight whatsoever.
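To make the preceding point concrete, the core solvency arithmetic behind such a check is trivial. Here is a minimal Python sketch using made-up figures; a real verification would also need proof that the platform controls the keys (for example via signed messages) and a privacy-preserving commitment to customer balances (for example a Merkle tree), but the basic test is just "spendable assets cover total liabilities":

```python
import hashlib

# Hypothetical cold-wallet balances published by a platform (address -> coins held),
# which anyone could re-check against a block explorer.
reserve_balances = {
    "wallet_address_1": 1200.00,
    "wallet_address_2": 850.50,
    "wallet_address_3": 410.25,
}

# Hypothetical customer liabilities (account id -> coins owed to that customer).
customer_balances = {
    "acct_001": 900.00,
    "acct_002": 1100.75,
    "acct_003": 300.00,
}

total_assets = sum(reserve_balances.values())
total_liabilities = sum(customer_balances.values())

# A simple public commitment to the liability set, so each customer can confirm
# their balance was counted without the full list being published.
commitment = hashlib.sha256(
    "".join(f"{acct}:{amount}" for acct, amount in sorted(customer_balances.items())).encode()
).hexdigest()

print(f"Assets:       {total_assets:.2f}")
print(f"Liabilities:  {total_liabilities:.2f}")
print(f"Fully backed: {total_assets >= total_liabilities}")
print(f"Liability commitment: {commitment}")
```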

Transparency does not prove crypto assets are safe. CoinTradeNewNote, Flexcoin ($600k), and Canadian Bitcoins ($100k) are examples of Canadian platforms that had crypto-assets stolen. All of them were online wallets and used no multi-sig as far as any records show. This is consistent with what we see globally - air-gapped multi-sig wallets have an impeccable record, while other schemes tend to suffer breach after breach. We don't actually know how much CoinTrader lost because there was no visibility. Rather than publishing details of what happened, the co-founder of CoinTrader silently moved on to found another platform - the "most trusted way to buy and sell crypto" - a site that has no information whatsoever (that I could find) on its storage practices and a FAQ advising that “[t]rading cryptocurrency is completely safe” and that having your own wallet is “entirely up to you! You can certainly keep cryptocurrency, or fiat, or both, on the app.” Doesn't sound like much was learned here, which is really sad to see.
It's not that complicated or unreasonable to set up a proper hardware wallet. Multi-sig can be learned in a single course. Something of equivalent complexity to a driver's license test could prevent all the cold storage exploits we've seen to date - even globally. Platform operators have a key advantage in detecting and preventing fraud - they know their customers far better than any custodian ever would. The best that custodians can do is find high-integrity individuals and train them to become even better wallet signatories. Rather than mandating that all platforms expose themselves to arbitrary third-party risks, regulations should center around ensuring that all signatories are background-checked, properly trained, and using proper procedures. We also need to make sure that signatories are empowered with rights and responsibilities to reject and report fraud. They need to know that they can safely challenge and delay a transaction - even if it turns out they made a mistake. We need to have an environment where mistakes are brought to the surface and dealt with. Not one where firms and people feel the need to hide what happened. In addition to a knowledge-based test, an auditor can privately interview each signatory to make sure they're not in coercive situations, and we should make sure they can freely and anonymously report any issues without threat of retaliation.
A proper multi-sig has each signature held by a separate person and is governed by policies and mutual decisions instead of a hierarchy. It includes at least one redundant signature. For best results, use 3of4, 3of5, 3of6, 4of5, 4of6, 4of7, 5of6, or 5of7.
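As a rough way to compare those configurations: an m-of-n wallet keeps funds recoverable after losing up to n - m keys, while a thief needs at least m keys to move anything. A quick Python sketch (illustrative only) tabulates this for the setups listed above:

```python
# For an m-of-n multi-sig: funds remain recoverable if at most (n - m) keys are lost,
# and theft requires at least m keys to be compromised or coerced.
configs = [(3, 4), (3, 5), (3, 6), (4, 5), (4, 6), (4, 7), (5, 6), (5, 7)]

print(f"{'scheme':>6}  {'keys you can lose':>17}  {'keys a thief needs':>18}")
for m, n in configs:
    scheme = f"{m}of{n}"
    print(f"{scheme:>6}  {n - m:>17}  {m:>18}")
```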

History has demonstrated over and over again the risk of hot wallets even to highly credible organizations. Nonetheless, many platforms have hot wallets for convenience. While such losses are generally compensated by platforms without issue (for example Poloniex, Bitstamp, Bitfinex, Gatecoin, Coincheck, Bithumb, Zaif, CoinBene, Binance, Bitrue, Bitpoint, Upbit, VinDAX, and now KuCoin), the public tends to focus more on cases that didn't end well. Regardless of what systems are employed, there is always some level of risk. For that reason, most members of the public would prefer to see third party insurance.
Rather than trying to convince third-party profit-seekers to provide comprehensive insurance and then relying on an expensive and slow legal system to enforce against whatever legal loopholes they manage to find each and every time something goes wrong, insurance could be run through multiple exchange operators and regulators, with the shared interest of having a reputable industry, keeping costs down, and taking care of Canadians. For example, a 4 of 7 multi-sig insurance fund held between 5 independent exchange operators and 2 regulatory bodies. All Canadian exchanges could pay premiums at a set rate based on their needed coverage, with a higher price paid for hot wallet coverage (anything not an air-gapped multi-sig cold wallet). Such a model would be much cheaper to manage, offer better coverage, and be much more reliable in paying out when needed. The kind of coverage you could have under this model is unheard of. You could even create something like the CDIC to protect Canadians who get their trading accounts hacked if they can sufficiently prove the loss is legitimate. In cases of fraud, gross negligence, or insolvency, the fund can be used to pay affected users directly (utilizing the last transparent balance report in the worst case), something which private insurance would never touch. While it's recommended to have official policies for coverage, a model where members vote would fully cover edge cases. (Could be similar to the Supreme Court where justices vote based on case law.)
Such a model could fully protect all Canadians across all platforms. You can have a fiat coverage governed by legal agreements, and crypto-asset coverage governed by both multi-sig and legal agreements. It could be practical, affordable, and inclusive.
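To illustrate the premium mechanics being proposed, here is a back-of-the-envelope Python sketch; every rate, balance, and platform name below is hypothetical and only meant to show how a shared fund could be sized:

```python
# Hypothetical premium schedule: a higher rate applies to hot wallet coverage
# (anything that is not an air-gapped multi-sig cold wallet).
COLD_RATE = 0.005  # 0.5% of covered cold-wallet balances per year (assumed)
HOT_RATE = 0.030   # 3.0% of covered hot-wallet balances per year (assumed)

# platform -> (cold wallet coverage needed, hot wallet coverage needed), in dollars
platforms = {
    "ExchangeA": (50_000_000, 2_000_000),
    "ExchangeB": (10_000_000, 500_000),
    "ExchangeC": (2_000_000, 250_000),
}

fund_total = 0.0
for name, (cold, hot) in platforms.items():
    premium = cold * COLD_RATE + hot * HOT_RATE
    fund_total += premium
    print(f"{name}: annual premium ${premium:,.0f}")

print(f"Pooled fund collects ${fund_total:,.0f} per year toward coverage and payouts")
```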

Now, we are at a crossroads. We can happily give up our freedom, our innovation, and our money. We can pay hefty expenses to auditors, lawyers, and regulators year after year (and make no mistake - this cost will grow to many millions or even billions as the industry grows - and it will be borne by all Canadians on every platform because platforms are not going to eat up these costs at a loss). We can make it nearly impossible for any new platform to enter the marketplace, forcing Canadians to use the same stagnant platforms year after year. We can centralize and consolidate the entire industry into 2 or 3 big players and have everyone else fail (possibly with heavy losses for users of those platforms). And when a flawed security model doesn't work and gets breached, we can make it even more complicated with even more people in suits making big money doing the job that blockchain was supposed to do in the first place. We can build a system which is so intertwined and dependent on big government, traditional finance, and central bankers that its future depends entirely on that of the fiat system, of fractional banking, and of government bail-outs. If we choose this path, as history has shown us over and over again, we cannot go back, save for revolution. Our children and grandchildren will still be paying the consequences of what we decided today.
Or, we can find solutions that work. We can maintain an open and innovative environment while making the adjustments we need to make to fully protect Canadian investors and cryptocurrency users, giving easy and affordable access to cryptocurrency for all Canadians on the platform of their choice, and creating an environment in which entrepreneurs and problem solvers can bring those solutions forward easily. None of the above precludes innovation in any way, or adds any unreasonable cost - and these three policies would demonstrably eliminate or resolve all 109 historic cases as studied here - that's every single case researched so far going back to 2011. It includes every loss that was studied so far not just in Canada but globally as well.
Unfortunately, finding answers is the least challenging part. Far more challenging is to get platform operators and regulators to agree on anything. My last post got no response whatsoever, and while the OSC has told me they're happy for industry feedback, I believe my opinion alone is fairly meaningless. This takes the whole community working together to solve. So please let me know your thoughts. Please take the time to upvote and share this with people. Please - let's get this solved and not leave it up to other people to do.

Facts/background/sources (skip if you like):



Thoughts?
submitted by azoundria2 to QuadrigaInitiative

Subreddit Stats: programming top posts from 2019-10-22 to 2020-10-21 06:41 PDT

Period: 364.67 days

                  Submissions   Comments
Total                    1000     180545
Rate (per day)           2.74     491.84
Unique Redditors          629      34951
Combined Score        1178903    2688497

Top Submitters' Top Submissions

  1. 47468 points, 49 submissions: iamkeyur
    1. One Guy Ruined Hacktoberfest 2020 (3039 points, 584 comments)
    2. AWS forked my project and launched it as its own service (2956 points, 810 comments)
    3. Privacy analysis of Tiktok’s app and website (2858 points, 234 comments)
    4. 98.css – design system for building faithful recreations of Windows 98 UIs (2781 points, 318 comments)
    5. Microsoft demos language model that writes code based on signature and comment (2621 points, 614 comments)
    6. Why does HTML think “chucknorris” is a color? (2565 points, 531 comments)
    7. Windows 95 UI Design (2309 points, 665 comments)
    8. The Linux codebase has over 3k TODO comments, many from over a decade ago (2119 points, 369 comments)
    9. eBay is port scanning visitors to their website (1829 points, 236 comments)
    10. Using const/let instead of var can make JavaScript code run 10× slower in Webkit (1814 points, 525 comments)
  2. 44853 points, 28 submissions: speckz
    1. From August, Chrome will start blocking ads that consume 4MB of network data, 15 seconds of CPU usage in any 30 second period, or 60 seconds of total CPU usage (8434 points, 590 comments)
    2. How To Spot Toxic Software Jobs From Their Descriptions (6246 points, 1281 comments)
    3. A Facebook crawler was making 7M requests per day to my stupid website (2662 points, 426 comments)
    4. Apple, Your Developer Documentation is Garbage (2128 points, 432 comments)
    5. The code I’m still ashamed of (2016) (2105 points, 429 comments)
    6. Slack Is Fumbling Developers And The Rise Of Developer Discords (2095 points, 811 comments)
    7. The Chromium project finds that around 70% of our serious security bugs are memory safety problems. Our next major project is to prevent such bugs at source. (1959 points, 418 comments)
    8. Advice to Myself When Starting Out as a Software Developer (1934 points, 257 comments)
    9. Software patents are another kind of disease (1893 points, 419 comments)
    10. My favourite Git commit (1772 points, 206 comments)
  3. 35237 points, 28 submissions: whackri
    1. It is perfectly OK to only code at work, you can have a life too (6765 points, 756 comments)
    2. Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. (5171 points, 437 comments)
    3. The entire Apollo 11 computer code that helped get us to the Moon is available on github. (3841 points, 433 comments)
    4. Raytracing - in Excel! (2478 points, 168 comments)
    5. Writing userspace USB drivers for abandoned devices (1689 points, 84 comments)
    6. Drum Machine in Excel (1609 points, 60 comments)
    7. fork() can fail: this is important (1591 points, 264 comments)
    8. Learn how computers add numbers and build a 4 bit adder circuit (1548 points, 66 comments)
    9. Heroes Of Might And Magic III engine written from scratch (open source, playable) (1453 points, 84 comments)
    10. Apollo Guidance Computer: Restoring the computer that put man on the Moon (1277 points, 47 comments)
  4. 14588 points, 11 submissions: pimterry
    1. I'm a software engineer going blind, how should I prepare? (4237 points, 351 comments)
    2. The 2038 problem is already affecting some systems (1988 points, 518 comments)
    3. TLDR pages: Simplified, community-driven man pages (1897 points, 182 comments)
    4. JetBrains Mono: A Typeface for Developers (1728 points, 456 comments)
    5. BlurHash: extremely compact representations of image placeholders (930 points, 159 comments)
    6. Let's Destroy C (855 points, 290 comments)
    7. Shared Cache is Going Away (833 points, 192 comments)
    8. XML is almost always misused (766 points, 538 comments)
    9. Wireshark has a new packet diagram view (688 points, 24 comments)
    10. fork() can fail: this is important (460 points, 299 comments)
  5. 14578 points, 9 submissions: magenta_placenta
    1. Trello handed over user's personal account to user's previous company (2962 points, 489 comments)
    2. Feds: IBM did discriminate against older workers in making layoffs - “Analysis shows it was primarily older workers (85.85%) in the total potential pool of those considered for layoff,” the EEOC wrote (2809 points, 509 comments)
    3. Stripe Workers Who Relocate Get $20,000 Bonus and a Pay Cut - Stripe Inc. plans to make a one-time payment of $20,000 to employees who opt to move out of San Francisco, New York or Seattle, but also cut their base salary by as much as 10% (2765 points, 989 comments)
    4. US court fully legalized website scraping and technically prohibited it - On September 9, the U.S. 9th circuit court of Appeals ruled that web scraping public sites does not violate the CFAA (Computer Fraud and Abuse Act) (2014 points, 327 comments)
    5. I Suspect many Task Deadlines are Designed to Force Engineers to Work for Free (1999 points, 553 comments)
    6. Intent to Deprecate and Freeze: The User-Agent string (1012 points, 271 comments)
    7. Contractor admits planting logic bombs in his software to ensure he’d get new work (399 points, 182 comments)
    8. AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning (396 points, 97 comments)
    9. Half of the websites using WebAssembly use it for malicious purposes - WebAssembly not that popular: Only 1,639 sites of the Top 1 Million use WebAssembly (222 points, 133 comments)
  6. 13750 points, 3 submissions: pedrovhb
    1. Bubble sort visualization (7218 points, 276 comments)
    2. Breadth-first search visualization (3874 points, 96 comments)
    3. Selection sort visualization (2658 points, 80 comments)
  7. 11833 points, 1 submission: flaming_bird
    1. 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code (11833 points, 956 comments)
  8. 11208 points, 10 submissions: PowerOfLove1985
    1. No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body (5975 points, 890 comments)
    2. Redesigning uBlock Origin (1184 points, 162 comments)
    3. Playing Around With The Fuchsia Operating System (696 points, 164 comments)
    4. Microsoft's underwater data centre resurfaces after two years (623 points, 199 comments)
    5. Microsoft Paint/Paintbrush in Javascript (490 points, 58 comments)
    6. GitHub shuts off access to Aurelia repository, citing trade sanctions (478 points, 81 comments)
    7. How 3D Game Rendering Works: Texturing (475 points, 22 comments)
    8. Simdjson: Parsing Gigabytes of JSON per Second (441 points, 90 comments)
    9. How 1500 bytes became the MTU of the internet (435 points, 60 comments)
    10. It’s OK for your open source library to be a bit shitty (411 points, 130 comments)
  9. 10635 points, 8 submissions: michalg82
    1. Turning animations to 60fps using AI (3449 points, 234 comments)
    2. Bug #1463112 “Cat sitting on keyboard crashes lightdm” (3150 points, 143 comments)
    3. Heroes Of Might And Magic III engine written from scratch (open source, playable) (1431 points, 172 comments)
    4. Vulkan is coming to Raspberry Pi: first triangle - Raspberry Pi (1318 points, 66 comments)
    5. An EPYC trip to Rome: AMD is Cloudflare's 10th-generation Edge server CPU (431 points, 60 comments)
    6. Microsoft cancels GDC 2020 presence due to coronavirus concerns (Following Sony, Facebook, Kojima Productions, Epic Games, Unity, and more) (371 points, 52 comments)
    7. Moving from reCAPTCHA to hCaptcha - The Cloudflare Blog (278 points, 71 comments)
    8. How much of a genius-level move was using binary space partitioning in Doom? (207 points, 109 comments)
  10. 10106 points, 10 submissions: SerenityOS
    1. Someone suggested I should host my website on my own OS. For that we'll need a web server, so here's me building a basic web server in C++ for SerenityOS! (2269 points, 149 comments)
    2. I've been learning about OS security lately. Here's me making a local root exploit for SerenityOS, and then fixing the kernel bugs that made it possible! (1372 points, 87 comments)
    3. SerenityOS was hacked in a 36c3 CTF! (Exploit and write-up) (1236 points, 40 comments)
    4. One week ago, I started building a JavaScript engine for SerenityOS. Here’s me integrating it with the web browser and adding some simple API’s like alert()! (1169 points, 63 comments)
    5. Implementing macOS-style "purgeable memory" in my kernel. This technique is amazing and helps apps be better memory usage citizens! (1131 points, 113 comments)
    6. SerenityOS: The second year (900 points, 101 comments)
    7. Using my own C++ IDE to make a little program for decorating my webcam frame (571 points, 33 comments)
    8. This morning I ported git to SerenityOS. It took about an hour and some hacks, but it works! :D (547 points, 64 comments)
    9. Smarter C/C++ inlining with attribute((flatten)) (521 points, 118 comments)
    10. Introduction to SerenityOS GUI programming (390 points, 45 comments)

Top Commenters

  1. XANi_ (10753 points, 821 comments)
  2. dnew (7513 points, 641 comments)
  3. drysart (7479 points, 202 comments)
  4. MuonManLaserJab (6666 points, 233 comments)
  5. SanityInAnarchy (6331 points, 350 comments)
  6. AngularBeginner (6215 points, 59 comments)
  7. SerenityOS (5627 points, 128 comments)
  8. chucker23n (5465 points, 370 comments)
  9. IshKebab (4898 points, 393 comments)
  10. L3tum (4857 points, 199 comments)

Top Submissions

  1. 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code by flaming_bird (11833 points, 956 comments)
  2. hentAI: Detecting and removing censors with Deep Learning and Image Segmentation by 7cmStrangler (9621 points, 395 comments)
  3. US Politicians Want to Ban End-to-End Encryption by CarrotRobber (9427 points, 523 comments)
  4. From August, Chrome will start blocking ads that consume 4MB of network data, 15 seconds of CPU usage in any 30 second period, or 60 seconds of total CPU usage by speckz (8434 points, 590 comments)
  5. Mozilla: The Greatest Tech Company Left Behind by matthewpmacdonald (7566 points, 1087 comments)
  6. Bubble sort visualization by pedrovhb (7218 points, 276 comments)
  7. During lockdown my wife has been suffering mentally from pressure to stay at her desk 100% of the time otherwise after a few minutes her laptop locks and she is recorded as inactive. I wrote this small app to help her escape her desk by periodically moving the cursor. Hopefully it can help others. by silitbang6000 (7193 points, 855 comments)
  8. It is perfectly OK to only code at work, you can have a life too by whackri (6765 points, 756 comments)
  9. Blockchain, the amazing solution for almost nothing by imogenchampagne (6725 points, 1561 comments)
  10. Blockchain, the amazing solution for almost nothing by jessefrederik (6524 points, 1572 comments)

Top Comments

  1. 2975 points: deleted's comment in hentAI: Detecting and removing censors with Deep Learning and Image Segmentation
  2. 2772 points: I_DONT_LIE_MUCH's comment in 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code
  3. 2485 points: api's comment in Stripe Workers Who Relocate Get $20,000 Bonus and a Pay Cut - Stripe Inc. plans to make a one-time payment of $20,000 to employees who opt to move out of San Francisco, New York or Seattle, but also cut their base salary by as much as 10%
  4. 2484 points: a_false_vacuum's comment in Stack Overflow lays off 15%
  5. 2464 points: iloveparagon's comment in Google engineer breaks down the problems he uses when doing technical interviews. Lots of advice on algorithms and programming.
  6. 2384 points: why_not_both_bot's comment in During lockdown my wife has been suffering mentally from pressure to stay at her desk 100% of the time otherwise after a few minutes her laptop locks and she is recorded as inactive. I wrote this small app to help her escape her desk by periodically moving the cursor. Hopefully it can help others.
  7. 2293 points: ThatInternetGuy's comment in Iranian Maintainer refuses to merge code from Israeli Developer. Cites Iranian regulations.
  8. 2268 points: xequae's comment in I'm a software engineer going blind, how should I prepare?
  9. 2228 points: turniphat's comment in AWS forked my project and launched it as its own service
  10. 2149 points: Rami-Slicer's comment in 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code
Generated with BBoe's Subreddit Stats
submitted by flpezet to subreddit_stats

RESEARCH REPORT ABOUT KYBER NETWORK

Author: Gamals Ahmed, CoinEx Business Ambassador


ABSTRACT

In this research report, we present a study on Kyber Network. Kyber Network is a decentralized, on-chain liquidity protocol designed to make trading tokens simple, efficient, robust and secure.
Kyber's design allows any party to contribute to an aggregated pool of liquidity within each blockchain while providing a single endpoint for takers to execute trades using the best rates available. We envision a connected liquidity network that facilitates seamless, decentralized cross-chain token swaps across Kyber-based networks on different chains.
Kyber is a fully on-chain liquidity protocol that enables decentralized exchange of cryptocurrencies in any application. Liquidity providers (Reserves) are integrated into one single endpoint for takers and users. When a user requests a trade, the protocol will scan the entire network to find the reserve with the best price and take liquidity from that particular reserve.

1.INTRODUCTION

DeFi applications all need access to good liquidity sources, which is a critical component to provide good services. Currently, decentralized liquidity is composed of various sources including DEXes (Uniswap, OasisDEX, Bancor), decentralized funds and other financial apps. The more scattered the sources, the harder it becomes for anyone to either find the best rate for their trade or to even find enough liquidity for their needs.
Kyber is a blockchain-based liquidity protocol that aggregates liquidity from a wide range of reserves, powering instant and secure token exchange in any decentralized application.
The protocol allows for a wide range of implementation possibilities for liquidity providers, allowing a wide range of entities to contribute liquidity, including end users, decentralized exchanges and other decentralized protocols. On the taker side, end users, cryptocurrency wallets, and smart contracts are able to perform instant and trustless token trades at the best rates available amongst the sources.
The Kyber Network is a project based on the Ethereum protocol that seeks to completely decentralize the exchange of cryptocurrencies and make exchange trustless by keeping everything on the blockchain.
Through the Kyber Network, users should be able to instantly convert or exchange any cryptocurrency.

1.1 OVERVIEW ABOUT KYBER NETWORK PROTOCOL

The Kyber Network is a decentralized way to exchange ETH and different ERC20 tokens instantly — no waiting and no registration needed.
Using this protocol, developers can build innovative payment flows and applications, including instant token swap services, ERC20 payments, and financial DApps — helping to build a world where any token is usable anywhere.
Kyber’s fully on-chain design allows for full transparency and verifiability in the matching engine, as well as seamless composability with DApps, not all of which are possible with off-chain or hybrid approaches. The integration of a large variety of liquidity providers also makes Kyber uniquely capable of supporting sophisticated schemes and catering to the needs of DeFi DApps and financial institutions. Hence, many developers leverage Kyber’s liquidity pool to build innovative financial applications, and not surprisingly, Kyber is the most used DeFi protocol in the world.
The Kyber Network is quite an established project that is trying to change the way we think of decentralised cryptocurrency exchange.
The Kyber Network has seen very rapid development. After being announced in May 2017, the testnet for the Kyber Network went live in August 2017. An ICO followed in September 2017, with the company raising 200,000 ETH valued at $60 million in just one day.
The live main net was released in February 2018 to whitelisted participants, and on March 19, 2018, the Kyber Network opened the main net as a public beta. Since then the network has seen increasing growth, with network volumes growing more than 500% in the first half of 2019.
There was a modest decrease in August 2019, which can be attributed to the price of ETH dropping by 50%, impacting the overall total volumes being traded and processed globally.
They are developing a decentralised exchange protocol that will allow developers to build payment flows and financial apps. This is indeed quite a competitive market as a number of other such protocols have been launched.
In Brief:
- Kyber Network is a tool that allows anyone to swap tokens instantly without having to use exchanges.
- It allows vendors to accept different types of cryptocurrency while still being paid in their preferred crypto of choice.
- It's built primarily for Ethereum, but any smart-contract based blockchain can incorporate it.
At its core, Kyber is a decentralized way to exchange ETH and different ERC20 tokens instantly–no waiting and no registration needed. To do this Kyber uses a diverse set of liquidity pools, or pools of different crypto assets called “reserves” that any project can tap into or integrate with.
A typical use case would be if a vendor allowed customers to pay in whatever currency they wish, but receive the payment in their preferred token. Another example would be for Dapp users. At present, if you are not a token holder of a certain Dapp you can’t use it. With Kyber, you could use your existing tokens, instantly swap them for the Dapp specific token and away you go.
All this swapping happens directly on the Ethereum blockchain, meaning every transaction is completely transparent.

1.1.1 WHY BUILD THE KYBER NETWORK?

While crypto currencies were built to be decentralized, many of the exchanges for trading crypto currencies have become centralized affairs. This has led to security vulnerabilities, with many exchanges becoming the victims of hacking and theft.
It has also led to increased fees and costs, and the centralized exchanges often come with slow transfer times as well. In some cases, wallets have been locked and users are unable to withdraw their coins.
Decentralized exchanges have popped up recently to address the flaws in the centralized exchanges, but they have their own flaws, most notably a lack of liquidity, and oftentimes high costs to modify trades in their on-chain order books.

Some of the Integrations with Kyber Protocol
The Kyber Network was formed to provide users with a decentralized exchange that keeps everything right on the blockchain, and uses a reserve system rather than an order book to provide high liquidity at all times. This will allow for the exchange and transfer of any cryptocurrency, even across exchanges, and costs will be kept at a minimum as well.
The Kyber Network has three guiding design philosophies since the start:
  1. To be most useful the network needs to be platform-agnostic, which allows any protocol or application the ability to take advantage of the liquidity provided by the Kyber Network without any impact on innovation.
  2. The network was designed to make real-world commerce and decentralized financial products not only possible but also feasible. It does this by allowing for instant token exchange across a wide range of tokens, and without any settlement risk.
  3. The Kyber Network was created with ease of integration as a priority, which is why everything runs fully on-chain and fully transparent. Kyber is not only developer-friendly, but is also compatible with a wide variety of systems.

1.1.2 WHO INVENTED KYBER?

Kyber’s founders are Loi Luu, Victor Tran, Yaron Velner — CEO, CTO, and advisor to the Kyber Network.

1.1.3 WHAT DISTINGUISHES KYBER?

Kyber’s mission has always been to integrate with other protocols so they’ve focused on being developer-friendly by providing architecture to allow anyone to incorporate the technology onto any smart-contract powered blockchain. As a result, a variety of different dapps, vendors, and wallets use Kyber’s infrastructure including Set Protocol, bZx, InstaDApp, and Coinbase wallet.
Besides, dapps, vendors, and wallets, Kyber also integrates with other exchanges such as Uniswap — sharing liquidity pools between the two protocols.
Limit orders on Kyber allow users to set a specific price in which they would like to exchange a token instead of accepting whatever price currently exists at the time of trading. However, unlike with other exchanges, users never lose custody of their crypto assets during limit orders on Kyber.
The Kyber protocol works by using pools of crypto funds called “reserves”, which currently support over 70 different ERC20 tokens. Reserves are essentially smart contracts with a pool of funds. Different parties with different prices and levels of funding control all reserves. Instead of using order books to match buyers and sellers to return the best price, the Kyber protocol looks at all the reserves and returns the best price among the different reserves. Reserves make money on the “spread” or differences between the buying and selling prices. Kyber wants any token holder to be able to easily convert one token to another with a minimum of fuss.

1.2 KYBER PROTOCOL

The protocol smart contracts offer a single interface for the best available token exchange rates to be taken from an aggregated liquidity pool across diverse sources.
● Aggregated liquidity pool. The protocol aggregates various liquidity sources into one liquidity pool, making it easy for takers to find the best rates offered with one function call.
● Diverse sources of liquidity. The protocol allows different types of liquidity sources to be plugged in. Liquidity providers may employ different strategies and different implementations to contribute liquidity to the protocol.
● Permissionless. The protocol is designed to be permissionless, where any developer can set up various types of reserves, and any end user can contribute liquidity. Implementations need to take into consideration various security vectors, such as reserve spamming, but these can be mitigated through a staking mechanism. We can expect implementations to be permissioned initially until the maintainers are confident about these considerations.
The core feature that the Kyber protocol facilitates is the token swap between taker and liquidity sources. The protocol aims to provide the following properties for token trades:
● Instant Settlement. Takers do not have to wait for their orders to be fulfilled, since trade matching and settlement occurs in a single blockchain transaction. This enables trades to be part of a series of actions happening in a single smart contract function.
● Atomicity. When takers make a trade request, their trade either gets fully executed, or is reverted. This “all or nothing” aspect means that takers are not exposed to the risk of partial trade execution.
● Public rate verification. Anyone can verify the rates that are being offered by reserves and have their trades instantly settled just by querying from the smart contracts.
● Ease of integration. Trustless and atomic token trades can be directly and easily integrated into other smart contracts, thereby enabling multiple trades to be performed in a smart contract function.
How each actor works is specified in the Network Actors section.
1. Takers refer to anyone who can directly call the smart contract functions to trade tokens, such as end-users, DApps, and wallets.
2. Reserves refer to anyone who wishes to provide liquidity. They have to implement the smart contract functions defined in the reserve interface in order to be registered and have their token pairs listed.
3. Registered reserves refer to those that will be cycled through for matching taker requests.
4. Maintainers refer to anyone who has permission to access the functions for the adding/removing of reserves and token pairs, such as a DAO or the team behind the protocol implementation.
5. In all, they comprise the network, which refers to all the actors involved in any given implementation of the protocol.
The protocol implementation needs to have the following:
1. Functions for takers to check rates and execute trades
2. Functions for the maintainers to register/remove reserves and token pairs
3. A reserve interface that defines the functions reserves need to implement
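A minimal sketch of those three pieces, written as Python pseudocode rather than the actual Solidity contracts; the class and method names are illustrative and do not match Kyber's real interfaces:

```python
from abc import ABC, abstractmethod
from typing import List, Optional, Tuple

class Reserve(ABC):
    """Interface a liquidity provider must implement to be listed (illustrative)."""

    @abstractmethod
    def get_rate(self, src: str, dest: str, amount: float) -> float:
        """Return the offered dest-per-src rate for this trade size, or 0 if unsupported."""

    @abstractmethod
    def trade(self, src: str, dest: str, amount: float) -> float:
        """Settle the trade and return the amount of dest token paid out."""

class Network:
    """Taker- and maintainer-facing functions of the protocol contract (illustrative)."""

    def __init__(self) -> None:
        self.reserves: List[Reserve] = []

    # Maintainer functions: listing and delisting reserves.
    def add_reserve(self, reserve: Reserve) -> None:
        self.reserves.append(reserve)

    def remove_reserve(self, reserve: Reserve) -> None:
        self.reserves.remove(reserve)

    # Taker function: query the best rate among all registered reserves.
    def get_expected_rate(self, src: str, dest: str, amount: float) -> Tuple[float, Optional[Reserve]]:
        best_rate, best_reserve = 0.0, None
        for reserve in self.reserves:
            rate = reserve.get_rate(src, dest, amount)
            if rate > best_rate:
                best_rate, best_reserve = rate, reserve
        return best_rate, best_reserve

    # Taker function: execute the trade against the best reserve, all-or-nothing.
    def trade(self, src: str, dest: str, amount: float) -> float:
        rate, reserve = self.get_expected_rate(src, dest, amount)
        if reserve is None:
            raise ValueError("no reserve lists this pair")  # whole call fails (atomicity)
        return reserve.trade(src, dest, amount)
```

In this sketch a taker-facing trade is all-or-nothing: if no reserve lists the pair, the whole call fails rather than partially executing, mirroring the atomicity property described above.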

1.3 KYBER CORE SMART CONTRACTS

Kyber Core smart contracts is an implementation of the protocol that has major protocol functions to allow actors to join and interact with the network. For example, the Kyber Core smart contracts provide functions for the listing and delisting of reserves and trading pairs by having clear interfaces for the reserves to comply to be able to register to the network and adding support for new trading pairs. In addition, the Kyber Core smart contracts also provide a function for takers to query the best rate among all the registered reserves, and perform the trades with the corresponding rate and reserve. A trading pair consists of a quote token and any other token that the reserve wishes to support. The quote token is the token that is either traded from or to for all trades. For example, the Ethereum implementation of the Kyber protocol uses Ether as the quote token.
In order to search for the best rate, all reserves supporting the requested token pair will be iterated through. Hence, the Kyber Core smart contracts need to have this search algorithm implemented.
The key functions implemented in the Kyber Core Smart Contracts are listed in Figure 2 below. We will visit and explain the implementation details and security considerations of each function in the Specification Section.

1.4 HOW KYBER’S ON-CHAIN PROTOCOL WORKS?

Kyber is the liquidity infrastructure for decentralized finance. Kyber aggregates liquidity from diverse sources into a pool, which provides the best rates for takers such as DApps, Wallets, DEXs, and End users.

1.4.1 PROVIDING LIQUIDITY AS A RESERVE

Anyone can operate a Kyber Reserve to market make for profit and make their tokens available for DApps in the ecosystem. Through an open reserve architecture, individuals, token teams and professional market makers can contribute token assets to Kyber’s liquidity pool and earn from the spread in every trade. These tokens become available at the best rates across DApps that tap into the network, making them instantly more liquid and useful.
MAIN RESERVE TYPES
Kyber currently has over 45 reserves in its network providing liquidity. There are 3 main types of reserves that allow different liquidity contribution options to suit the unique needs of different providers.
1. Automated Price Reserves (APR) — Allows token teams and users with large token holdings to have an automated yet customized pricing system with low maintenance costs. Synthetix and Melon are examples of teams that run APRs.
2. Fed Price Reserves (FPR) — Operated by professional market makers that require custom and advanced pricing strategies tailored to their specific needs. Kyber, alongside reserves such as OneBit, runs FPRs.
3. Bridge Reserves (BR) — These are specialized reserves meant to bring liquidity from other on-chain liquidity providers like Uniswap, Oasis, DutchX, and Bancor into the network.

1.5 KYBER NETWORK ROLES

The Kyber Network functions through coordination between several different roles and functions, as explained below:
- Users — This entity uses the Kyber Network to send and receive tokens. A user can be an individual, a merchant, and even a smart contract account.
- Reserve Entities — This role is used to add liquidity to the platform through the dynamic reserve pool. Some reserve entities are internal to the Kyber Network, but others may be registered third parties. Reserve entities may be public if the public contributes to the reserves they hold; otherwise they are considered private. By allowing third parties as reserve entities the network adds diversity, which prevents monopolization and keeps exchange rates competitive. Allowing third party reserve entities also allows for the listing of less popular coins with lower volumes.
- Reserve Contributors — Where reserve entities are classified as public, the reserve contributor is the entity providing reserve funds. Their incentive for doing so is a profit share from the reserve.
- The Reserve Manager — Maintains the reserve, calculates exchange rates and enters them into the network. The reserve manager profits from exchange spreads set by them on their reserves. They can also benefit from increasing volume by accessing the entire Kyber Network.
- The Kyber Network Operator — Currently the Kyber Network team is filling the role of the network operator, which has a function to add/remove Reserve Entities as well as controlling the listing of tokens. Eventually, this role will revert to a proper decentralized governance.

1.6 BASIC TOKEN TRADE

A basic token trade is one that has the quote token as either the source or destination token of the trade request. The execution flow of a basic token trade is described below, where a taker would like to exchange ETH for BAT tokens as an example. The trade happens in a single blockchain transaction.
1. Taker sends 1 ETH to the protocol contract, and would like to receive BAT in return.
2. Protocol contract queries the first reserve for its ETH to BAT exchange rate.
3. Reserve 1 offers an exchange rate of 1 ETH for 800 BAT.
4. Protocol contract queries the second reserve for its ETH to BAT exchange rate.
5. Reserve 2 offers an exchange rate of 1 ETH for 820 BAT.
6. This process goes on for the other reserves. After the iteration, reserve 2 is discovered to have offered the best ETH to BAT exchange rate.
7. Protocol contract sends 1 ETH to reserve 2.
8. The reserve sends 820 BAT to the taker.

1.7 TOKEN-TO-TOKEN TRADE

A token-to-token trade is one where the quote token is neither the source nor the destination token of the trade request. The exchange flow of a token-to-token trade is described below, where a taker would like to exchange BAT tokens for DAI as an example. The trade happens in a single blockchain transaction.
1. Taker sends 50 BAT to the protocol contract, and would like to receive DAI in return.
2. Protocol contract sends 50 BAT to the reserve offering the best BAT to ETH rate.
3. Protocol contract receives 1 ETH in return.
4. Protocol contract sends 1 ETH to the reserve offering the best ETH to DAI rate.
5. Protocol contract receives 30 DAI in return.
6. Protocol contract sends 30 DAI to the user.
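The following self-contained Python sketch replays both flows with the same illustrative numbers used above; the reserves, rates, and amounts are toy values, not live data or Kyber's actual contract logic:

```python
# Illustrative reserve quotes, keyed by (source, destination) token pair.
# A rate is "destination units received per 1 source unit".
reserve_rates = {
    "Reserve1": {("ETH", "BAT"): 800},
    "Reserve2": {("ETH", "BAT"): 820, ("BAT", "ETH"): 0.02, ("ETH", "DAI"): 28},
    "Reserve3": {("ETH", "DAI"): 30},
}

QUOTE = "ETH"  # the quote token every trade is routed through

def best_reserve(src, dest):
    """Iterate over all reserves supporting the pair and return (name, best rate)."""
    quotes = [(name, rates[(src, dest)])
              for name, rates in reserve_rates.items() if (src, dest) in rates]
    return max(quotes, key=lambda q: q[1])

def trade(src, dest, amount):
    """Basic trade if one side is the quote token; otherwise hop through it."""
    if QUOTE in (src, dest):
        name, rate = best_reserve(src, dest)
        received = amount * rate
        print(f"  {amount:g} {src} -> {received:g} {dest} via {name}")
        return received
    eth_amount = trade(src, QUOTE, amount)   # e.g. 50 BAT -> 1 ETH
    return trade(QUOTE, dest, eth_amount)    # e.g. 1 ETH -> 30 DAI

print("Basic trade (ETH to BAT):")
trade("ETH", "BAT", 1)

print("Token-to-token trade (BAT to DAI):")
trade("BAT", "DAI", 50)
```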

2.KYBER NETWORK CRYSTAL (KNC) TOKEN

Kyber Network Crystal (KNC) is an ERC-20 utility token and an integral part of Kyber Network.
KNC is the first deflationary staking token where staking rewards and token burns are generated from actual network usage and growth in DeFi.
The Kyber Network Crystal (KNC) is the backbone of the Kyber Network. It works to connect liquidity providers and those who need liquidity and serves three distinct purposes. The first of these is to collect transaction fees, and a portion of every fee collected is burned, which keeps KNC deflationary. Kyber Network Crystals (KNC) are named after the crystals in Star Wars used to power lightsabers.
KNC also ensures the smooth operation of the reserve system in the Kyber liquidity network, since entities must use third-party tokens to buy the KNC that pays for their operations in the network.
KNC allows token holders to play a critical role in determining the incentive system, building a wide base of stakeholders, and facilitating economic flow in the network. A small fee is charged each time a token exchange happens on the network, and KNC holders get to vote on this fee model and distribution, as well as other important decisions. Over time, as more trades are executed, additional fees will be generated for staking rewards and reserve rebates, while more KNC will be burned.
- Participation rewards — KNC holders can stake KNC in the KyberDAO and vote on key parameters. Voters will earn staking rewards (in ETH).
- Burning — Some of the network fees will be burned to reduce KNC supply permanently, providing long-term value accrual from decreasing supply.
- Reserve incentives — KNC holders determine the portion of network fees that are used as rebates for selected liquidity providers (reserves) based on their volume performance.

Finally, the KNC token is the connection between the Kyber Network and the exchanges, wallets, and dApps that leverage the liquidity network. This is a virtuous system since entities are rewarded with referral fees for directing more users to the Kyber Network, which helps increase adoption for Kyber and for the entities using the Network.
And of course there will soon be a fourth and fifth use for KNC: as a staking token used to generate passive income, and as a governance token used to vote on key parameters of the network.
The Kyber Network Crystal (KNC) was released in a September 2017 ICO at a price around $1. There were 226,000,000 KNC minted for the ICO, with 61% sold to the public. The remaining 39% are controlled 50/50 by the company and the founders/advisors, with a 1 year lockup period and 2 year vesting period.
Currently, just over 180 million coins are in circulation, and the total supply has been reduced to 210.94 million after the company burned its one millionth KNC token in May 2019 and then its two millionth KNC token just three months later.
That means that while it took 15 months to burn the first million KNC, it took just 10 weeks to burn the second million KNC. That shows how rapidly adoption has been growing recently for Kyber, with July 2019 USD trading volumes on the Kyber Network nearly reaching $60 million. This volume has continued growing, and on March 13, 2020, the network experienced its highest daily trading activity of $33.7 million in a 24-hour period.
Currently KNC is required by Reserve Managers to operate on the network, which ensures a minimum amount of demand for the token. Combined with future plans for burning coins, price is expected to maintain an upward bias, although it has suffered along with the broader market in 2018 and more recently during the summer of 2019.
It was unfortunate in 2020 that a beginning rally was cut short by the coronavirus pandemic, although the token has stabilized as of April 2020, and there are hopes the rally could resume in the summer of 2020.

2.1 HOW ARE KNC TOKENS PRODUCED?

The native token of Kyber is called Kyber Network Crystals (KNC). All reserves are required to pay fees in KNC for the right to manage reserves. The KNC collected as fees is either burned and taken out of the total supply or awarded to integrated dapps as an incentive to help them grow.

2.2 HOW DO YOU GET HOLD OF KNC TOKENS?

Kyber Swap can be used to buy ETH directly using a credit card, which can then be used to swap for KNC. Besides Kyber itself, exchanges such as Binance, Huobi, and OKex trade KNC.

2.3 WHAT CAN YOU DO WITH KYBER?

The most direct and basic function of Kyber is for instantly swapping tokens without registering an account, which anyone can do using an Ethereum wallet such as MetaMask. Users can also create their own reserves and contribute funds to a reserve, but that process is still a fairly technical one - something Kyber is working on making easier for users in the future.

2.4 THE GOAL OF KYBER THE FUTURE

The goal of Kyber in the coming years is to solidify its position as a one-stop solution for powering liquidity and token swapping on Ethereum. Kyber plans on a major protocol upgrade called Katalyst, which will create new incentives and growth opportunities for all stakeholders in their ecosystem, especially KNC holders. The upgrade will mean more use cases for KNC including to use KNC to vote on governance decisions through a decentralized organization (DAO) called the KyberDAO.
With the upcoming Katalyst protocol upgrade and new KNC model, Kyber will provide even more benefits for stakeholders. For instance, reserves will no longer need to hold a KNC balance for fees, removing a major friction point, and there will be rebates for top performing reserves. KNC holders can also stake their KNC to participate in governance and receive rewards.

2.5 BUYING & STORING KNC

Those interested in buying KNC tokens can do so at a number of exchanges. Perhaps the best bets among them are the likes of Coinbase Pro and Binance. The former is based in the USA whereas the latter is an offshore exchange.
The trading volume is well spread out at these exchanges, which means that the liquidity is not concentrated and dependent on any one exchange. You also have decent liquidity on each of the exchange books. For example, the Binance BTC / KNC books are wide and there is decent turnover. This means easier order execution.
KNC is an ERC20 token and can be stored in any wallet with ERC20 support, such as MyEtherWallet or MetaMask. One interesting alternative is the KyberSwap Android mobile app that was released in August 2019.
It allows for instant swapping of tokens and has support for over 70 different altcoins. It also allows users to set price alerts and limit orders and works as a full-featured Ethereum wallet.

2.6 KYBER KATALYST UPGRADE

Kyber has announced their intention to become the de facto liquidity layer for the Decentralized Finance space, aiming to have Kyber as the single on-chain endpoint used by the majority of liquidity providers and dApp developers. In order to achieve this goal the Kyber Network team is looking to create an open ecosystem that garners trust from the decentralized finance space. They believe this is the path that will lead the majority of projects, developers, and users to choose Kyber for liquidity needs. With that in mind they have recently announced the launch of a protocol upgrade to Kyber which is being called Katalyst.
The Katalyst upgrade will create a stronger ecosystem by creating strong alignments towards a common goal, while also strengthening the incentives for stakeholders to participate in the ecosystem.
The primary beneficiaries of the Katalyst upgrade will be the three major Kyber stakeholders:
1. Reserve managers who provide network liquidity;
2. dApps that connect takers to Kyber;
3. KNC holders.
These stakeholders can expect to see benefits as highlighted below: Reserve Managers will see two new benefits to providing liquidity for the network. The first of these benefits will be incentives for providing reserves. Once Katalyst is implemented part of the fees collected will go to the reserve managers as an incentive for providing liquidity.
This mechanism is similar to rebates in traditional finance, and is expected to drive the creation of additional reserves and market making, which in turn will lead to greater liquidity and platform reach.
Katalyst will also do away with the need for reserve managers to maintain a KNC balance for use as network fees. Instead, fees will be automatically collected and used as incentives or burned as appropriate. This should remove a great deal of friction for reserves to connect with Kyber without affecting the competitive exchange rates that takers in the system enjoy.
dApp Integrators will now be able to set their own spread, which will give them full control over their own business model. This means the current fee sharing program that shares 30% of the 0.25% fee with dApp developers will go away and developers will determine their own spread. It's believed this will increase dApp development within Kyber as developers will now be in control of fees.
KNC Holders, often thought of as the core of the Kyber Network, will be able to take advantage of a new staking mechanism that will allow them to receive a portion of network fees by staking their KNC and participating in the KyberDAO.

2.7 COMING KYBERDAO

With the implementation of the Katalyst protocol the KNC holders will be put right at the heart of Kyber. Holders of KNC tokens will now have a critical role to play in determining the future economic flow of the network, including its incentive systems.
The primary way this will be achieved is through KyberDAO, a way in which on-chain and off-chain governance will align to streamline cooperation between the Kyber team, KNC holders, and market participants.
The Kyber Network team has identified 3 key areas of consideration for the KyberDAO:
  1. Broad representation, transparent governance and network stability
  2. Strong incentives for KNC holders to maintain their stake and be highly involved in governance
  3. Maximizing participation with a wide range of options for voting delegation
Interaction between KNC Holders & Kyber
This means KNC holders have been empowered to determine the network fee and how to allocate the fees to ensure maximum network growth. KNC holders will now have three fee allocation options to vote on:
  • Voting Rewards: Immediate value creation. Holders who stake and participate in the KyberDAO get their share of the fees designated for rewards.
  • Burning: Long term value accrual. The decreasing supply of KNC will improve the token appreciation over time and benefit those who did not participate.
  • Reserve Incentives: Value creation via network growth. By rewarding Kyber reserve managers based on their performance, it helps to drive greater volume, value, and network fees.
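As a rough illustration of how these three allocation levers interact, the small sketch below splits one epoch's fees across them. The percentage split is purely hypothetical, since the actual ratios are decided by KyberDAO votes.

```python
# Hypothetical illustration of splitting an epoch's collected network fees
# between the three allocation options KNC holders vote on. The ratios used
# in the example are made up; in practice they are set by KyberDAO votes.

def allocate_network_fees(total_fees_eth, rewards_pct, burn_pct, reserve_pct):
    """Split an epoch's fees into voting rewards, KNC burn, and reserve
    incentives. The three percentages must sum to 100."""
    if rewards_pct + burn_pct + reserve_pct != 100:
        raise ValueError("allocation percentages must sum to 100")
    return {
        "voting_rewards_eth": total_fees_eth * rewards_pct / 100,
        "knc_burn_eth": total_fees_eth * burn_pct / 100,
        "reserve_incentives_eth": total_fees_eth * reserve_pct / 100,
    }

# Example: 1,000 ETH of fees in an epoch, with a hypothetical 65/5/30 vote.
print(allocate_network_fees(1_000, rewards_pct=65, burn_pct=5, reserve_pct=30))
```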

2.8 TRANSPARENCY AND STABILITY

The design of the KyberDAO is meant to allow for the greatest network stability, as well as maximum transparency and the ability to quickly recover in emergency situations. Initially, the Kyber team will remain as maintainers of the KyberDAO. The system is being developed to be as verifiable as possible, while maintaining maximum transparency regarding the role of the maintainer in the DAO.
Part of this transparency means that all data and processes are stored on-chain if feasible. Voting regarding network fees and allocations will be done on-chain and will be immutable. In situations where on-chain storage or execution is not feasible there will be a set of off-chain governance processes developed to ensure all decisions are followed through on.

2.9 KNC STAKING AND DELEGATION

Staking will be a new addition, and both staking and voting will be done in fixed periods of time called “epochs”. These epochs will be measured in Ethereum block times, and each KyberDAO epoch will last roughly 2 weeks.
This is a relatively rapid epoch, which is beneficial in that it allows faster DAO decision-making and faster reward distribution. On the downside, it means there needs to be a new voting campaign every two weeks, which requires more frequent participation from KNC stakeholders, as well as more work from the Kyber team.
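Since epochs are measured in Ethereum block times, a quick back-of-the-envelope calculation shows what "roughly 2 weeks" means in blocks; the average block time used here is an assumption.

```python
# Rough estimate of how many Ethereum blocks make up a ~2-week KyberDAO epoch.
# The average block time is an assumption (Ethereum at the time averaged
# roughly 13-15 seconds per block).
SECONDS_PER_BLOCK = 13.5            # assumed average block time, in seconds
EPOCH_SECONDS = 14 * 24 * 60 * 60   # two weeks, in seconds

blocks_per_epoch = int(EPOCH_SECONDS // SECONDS_PER_BLOCK)
print(f"~{blocks_per_epoch:,} blocks per epoch")  # on the order of 90,000 blocks
```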
Delegation will be part of the protocol, allowing stakers to delegate their voting rights to third-party pools or other entities. The pools receiving the delegation rights will be free to determine their own fee structure and voting decisions. Because the pools will share in rewards, and because their voting decisions will be clearly visible on-chain, it is expected that they will continue to work to the benefit of the network.

3. TRADING

After the September 2017 ICO, KNC settled into a trading price that hovered around $1.00 (decreasing in BTC value) until December. The token has followed the trend of most other altcoins — rising in price through December and sharply declining toward the beginning of January 2018.
The KNC price fell throughout all of 2018 with one exception during April. From April 6th to April 28th, the price rose over 200 percent. This run-up coincided with a blog post outlining plans to bring Bitcoin to the Ethereum blockchain. Since then, however, the price has steadily fallen, currently resting on what looks like a $0.15 (~0.000045 BTC) floor.
With the number of partners using the Kyber Network, the price may rise as they begin to fully use the network. The development team has consistently hit the milestones they’ve set out to achieve, so make note of any release announcements on the horizon.

4. COMPETITION

The 0x project is the biggest competitor to Kyber Network. Both teams are attempting to enter the decentralized exchange market. The primary difference between the two is that Kyber performs the entire exchange process on-chain while 0x keeps the order book and matching off-chain.
As a crypto swap exchange, the platform also competes with ShapeShift and Changelly.

5. KYBER MILESTONES

• June 2020: Digifox, an all-in-one finance application by popular crypto trader and YouTuber Nicholas Merten a.k.a. DataDash (340K subs), integrated Kyber to enable users to easily swap between cryptocurrencies without having to leave the application.
• June 2020: Stake Capital partnered with Kyber to provide convenient KNC staking and delegation services, and also took a KNC position to participate in governance.
• June 2020: Outlined the benefits of the Fed Price Reserve (FPR) for professional market makers and advanced developers.
• May 2020: Kyber crossed US$1 Billion in total trading volume and 1 Million transactions, performed entirely on-chain on Ethereum.
• May 2020: StakeWith.Us partnered with Kyber Network as a KyberDAO Pool Master.
• May 2020: 2Key, a popular blockchain referral solution using smart links, integrated Kyber's on-chain liquidity protocol for seamless token swaps.
• May 2020: Blockchain game League of Kingdoms integrated Kyber to accept Token Payments for Land NFTs.
• May 2020: Joined the Zcash Developer Alliance, an invite-only working group to advance Zcash development and interoperability.
• May 2020: Joined the Chicago DeFi Alliance to help accelerate on-chain market making for professionals and developers.
• March 2020: Set a new record of USD $33.7M in 24H fully on-chain trading volume, and $190M in 30-day on-chain trading volume.
• March 2020: Integrated by Rarible, Bullionix, and Unstoppable Domains, with the KyberWidget deployed on IPFS, which allows anyone to swap tokens through Kyber without being blocked.
• February 2020: Popular Ethereum blockchain game Axie Infinity integrated Kyber to accept ERC20 payments for NFT game items.
• February 2020: Kyber's protocol was integrated by Gelato Finance, Idle Finance, rTrees, Sablier, and 0x API for their liquidity needs.
• January 2020: Kyber Network was found to be the most used protocol in the whole decentralized finance (DeFi) space in 2019, according to a DeFi research report by Binance.
• December 2019: Switcheo integrated Kyber's protocol for enhanced liquidity on their own DEX.
• December 2019: DeFi wallet Eidoo integrated Kyber for seamless in-wallet token swaps.
• December 2019: Announced the development of the Katalyst Protocol Upgrade and new KNC token model.
• July 2019: Developed the Waterloo Bridge, a Decentralized Practical Cross-chain Bridge between EOS and Ethereum, successfully demonstrating a token swap from Ethereum to EOS.
• July 2019: Trust Wallet, the official Binance wallet, integrated Kyber as part of its decentralized token exchange service, allowing even more seamless in-wallet token swaps for thousands of users around the world.
• May 2019: HTC, the large consumer electronics company with more than 20 years of innovation, integrated Kyber into its Zion Vault Wallet on EXODUS 1, the first native web 3.0 blockchain phone, allowing users to easily swap between cryptocurrencies in a decentralized manner without leaving the wallet.
• January 2019: Introduced the Automated Price Reserve (APR), a capital-efficient way for token teams and individuals to market make with low slippage.
• January 2019: The popular Enjin Wallet, a default blockchain DApp on the Samsung S10 and S20 mobile phones, integrated Kyber to enable in-wallet token swaps.
• October 2018: Kyber was a founding member of the WBTC (Wrapped Bitcoin) Initiative and DAO.
• October 2018: Developed the KyberWidget for ERC20 token swaps on any website, with CoinGecko being the first major project to use it on their popular site.


submitted by CoinEx_Institution to kybernetwork

Dive Into Tendermint Consensus Protocol (I)

Dive Into Tendermint Consensus Protocol (I)
This article is written by the CoinEx Chain lab. CoinEx Chain is the world’s first public chain exclusively designed for DEX, and will also include a Smart Chain supporting smart contracts and a Privacy Chain protecting users’ privacy.
longcpp @ 20200618
This is Part 1 of the serialized articles aimed to explain the Tendermint consensus protocol in detail.
Part 1. Preliminary of the consensus protocol: security model and PBFT protocol
Part 2. Tendermint consensus protocol illustrated: two-phase voting protocol and the locking and unlocking mechanism
Part 3. Weighted round-robin proposer selection algorithm used in Tendermint project
Any consensus ultimately reached is a general agreement, that is, the majority opinion. The consensus protocol on which a blockchain system operates is no exception. As a distributed system, the blockchain system aims to maintain the validity of the system. Intuitively, the validity of the blockchain system has two meanings: firstly, there is no ambiguity, and secondly, it can process requests to update its status. The former corresponds to the safety requirements of distributed systems, while the latter corresponds to the requirements of liveness. The validity of distributed systems is mainly maintained by consensus protocols, and the multiple nodes and potentially unstable network communication involved in such systems have brought huge challenges to the design of consensus protocols.

The semi-synchronous network model and Byzantine fault tolerance

Researchers of distributed systems characterize these problems that may occur in nodes and network communications using node failure models and network models. The fail-stop failure in node failure models refers to the situation where the node itself stops running due to configuration errors or other reasons, thus unable to go on with the consensus protocol. This type of failure will not cause side effects on other parts of the distributed system except that the node itself stops running. However, for such distributed systems as the public blockchain, when designing a consensus protocol, we still need to consider the evildoing intended by nodes besides their failure. These incidents are all included in the Byzantine Failure model, which covers all unexpected situations that may occur on the node, for example, passive downtime failures and any deviation intended by the nodes from the consensus protocol. For a better explanation, downtime failures refer to nodes’ passive running halt, and the Byzantine failure to any arbitrary deviation of nodes from the consensus protocol.
Compared with the node failure model, which can be roughly divided into the passive and active cases, the modeling of network communication is more difficult. The network itself suffers from instability and communication delay. Moreover, since all network communication is ultimately completed by nodes, which may themselves have a downtime failure or a Byzantine failure, it is usually difficult to tell whether a failure arises from the node or from the network when a node does not receive another node's network message. Although network communication may be affected by many factors, researchers found that network models can be classified by the communication delay. For example, a node may fail to send data packets due to a fail-stop failure, and as a result, the corresponding communication delay is unknown and can be any value. According to the concept of communication delay, the network communication model can be divided into the following three categories:
  • The synchronous network model: There is a fixed, known upper bound of delay $\Delta$ in network communication. Under this model, the maximum delay of network communication between two nodes in the network is $\Delta$. Even if there is a malicious node, the communication delay arising therefrom does not exceed $\Delta$.
  • The asynchronous network model: There is an unknown delay in network communication, without a known upper bound on the delay, but the message will still be successfully delivered in the end. Under this model, the network communication delay between two nodes in the network can be any possible value; that is, a malicious node, if any, can arbitrarily extend the communication delay.
  • The semi-synchronous network model: Assume that there is a Global Stabilization Time (GST), before which the network behaves as in the asynchronous network model and after which, as in the synchronous network model. In other words, after GST there is a fixed, known upper bound of delay $\Delta$ in network communication. A malicious node can delay the GST arbitrarily, and there is no notification when GST occurs. Under this model, a message sent at time $T$ is guaranteed to be delivered by time $\max(T, GST) + \Delta$.
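To make the last bullet concrete, the short sketch below computes the guaranteed delivery deadline of a message under the semi-synchronous model; the concrete numbers are illustrative only.

```python
# Illustrative helper for the semi-synchronous model: a message sent at time T
# is guaranteed to be delivered no later than max(T, GST) + DELTA, where GST is
# the Global Stabilization Time (unknown to the protocol in advance) and DELTA
# is the known post-GST upper bound on network delay. All values are made up.

def delivery_deadline(send_time, gst, delta):
    """Latest possible delivery time of a message under partial synchrony."""
    return max(send_time, gst) + delta

DELTA = 2    # known upper bound on delay after GST (seconds, illustrative)
GST = 100    # global stabilization time (illustrative)

print(delivery_deadline(send_time=10, gst=GST, delta=DELTA))   # 102: sent before GST
print(delivery_deadline(send_time=150, gst=GST, delta=DELTA))  # 152: sent after GST
```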
The synchronous network model is the most ideal network environment. Every message sent through the network can be received within a predictable time, but this model cannot reflect the real network communication situation. In a real network, network failures are inevitable from time to time, breaking the assumption of the synchronous network model. Yet the asynchronous network model goes to the other extreme and cannot reflect the real network situation either. Moreover, according to the FLP (Fischer-Lynch-Paterson) impossibility result, under this model no deterministic consensus protocol is guaranteed to reach consensus in finite time if even a single node may fail. In contrast, the semi-synchronous network model better describes the real-world network communication situation: network communication is usually synchronous, or may return to normal after a short time. Such an experience must be no stranger to everyone: the web page, which usually loads quite fast, opens slowly every now and then, and you need to retry before you know the network is back to normal, since there is usually no notification. The peer-to-peer (P2P) network communication widely used in blockchain projects also makes it possible for a node to send and receive information through multiple network channels. It is unrealistic to keep blocking the network information transmission of a node for a long time. Therefore, all the discussion below is under the semi-synchronous network model.
The design and selection of consensus protocols for public chain networks that allow nodes to dynamically join and leave need to consider possible Byzantine failures. Therefore, the consensus protocol of a public chain network is designed to guarantee the security and liveness of the network under the semi-synchronous network model on the premise of possible Byzantine failure. Researchers of distributed systems point out that to ensure the security and liveness of the system, the consensus protocol itself needs to meet three requirements:
  • Validity: The value reached by honest nodes must be the value proposed by one of them
  • Agreement: All honest nodes must reach consensus on the same value
  • Termination: The honest nodes must eventually reach consensus on a certain value
Validity and agreement can guarantee the security of the distributed system, that is, the honest nodes will never reach a consensus on a random value, and once the consensus is reached, all honest nodes agree on this value. Termination guarantees the liveness of distributed systems. A distributed system unable to reach consensus is useless.

The CAP theorem and Byzantine Generals Problem

In a semi-synchronous network, is it possible to design a Byzantine fault-tolerant consensus protocol that satisfies validity, agreement, and termination? How many Byzantine nodes can a system tolerate? The CAP theorem and the Byzantine Generals Problem provide answers to these two questions and have thus become the basic guidelines for the design of Byzantine fault-tolerant consensus protocols.
Lamport, Shostak, and Pease abstracted the design of the consensus mechanism in the distributed system in 1982 as the Byzantine Generals Problem, which refers to such a situation as described below: several generals each lead the army to fight in the war, and their troops are stationed in different places. The generals must formulate a unified action plan for the victory. However, since the camps are far away from each other, they can only communicate with each other through the communication soldiers, or, in other words, they cannot appear on the same occasion at the same time to reach a consensus. Unfortunately, among the generals, there is a traitor or two who intend to undermine the unified actions of the loyal generals by sending the wrong information, and the communication soldiers cannot send the message to the destination by themselves. It is assumed that each communication soldier can prove the information he has brought comes from a certain general, just as in the case of a real BFT consensus protocol, each node has its public and private keys to establish an encrypted communication channel for each other to ensure that its messages will not be tampered with in the network communication, and the message receiver can also verify the sender of the message based thereon. As already mentioned, any consensus agreement ultimately reached represents the consensus of the majority. In the process of generals communicating with each other for an offensive or retreat, a general also makes decisions based on the majority opinion from the information collected by himself.
According to the research of Lamport et al, if there are 1/3 or more traitors in the node, the generals cannot reach a unified decision. For example, in the following figure, assume there are 3 generals and only 1 traitor. In the figure on the left, suppose that General C is the traitor, and A and B are loyal. If A wants to launch an attack and informs B and C of such intention, yet the traitor C sends a message to B, suggesting what he has received from A is a retreat. In this case, B can't decide as he doesn't know who the traitor is, and the information received is insufficient for him to decide. If A is a traitor, he can send different messages to B and C. Then C faithfully reports to B the information he received. At this moment as B receives conflicting information, he cannot make any decisions. In both cases, even if B had received consistent information, it would be impossible for him to spot the traitor between A and C. Therefore, it is obvious that in both situations shown in the figure below, the honest General B cannot make a choice.
According to this conclusion, with $n$ generals and at most $f$ traitors, the generals cannot reach a consensus if $n \leq 3f$; if $n > 3f$, a consensus can be reached. Put in terms of nodes, when the number of Byzantine nodes $f$ reaches 1/3 or more of the total number of nodes $n$ in the system ($f \ge n/3$), no consensus protocol can guarantee agreement among all honest nodes; consensus is only possible when $f < n/3$. Without loss of generality, the subsequent discussion on the consensus protocol assumes $n \ge 3f + 1$ by default.
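The $n > 3f$ bound translates directly into a small helper: a system of $n$ nodes tolerates at most $\lfloor (n-1)/3 \rfloor$ Byzantine nodes. A quick sketch:

```python
# The classical Byzantine fault-tolerance bound: consensus is possible only
# when n > 3f, i.e. a system of n nodes tolerates at most floor((n - 1) / 3)
# Byzantine nodes.

def max_byzantine_faults(n):
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n):
    """Size of a 2f + 1 quorum for a system of n nodes."""
    return 2 * max_byzantine_faults(n) + 1

for n in (4, 7, 10, 100):
    print(f"n={n:>3}: tolerates f={max_byzantine_faults(n)}, quorum={quorum_size(n)}")
```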
The conclusion reached by Lamport et al. on the Byzantine Generals Problem draws a line between the possible and the impossible in the design of the Byzantine fault tolerance consensus protocol. Within the possible range, how will the consensus protocol be designed? Can both the security and liveness of distributed systems be fully guaranteed? Brewer provided the answer in his CAP theorem in 2000. It indicated that a distributed system requires the following three basic attributes, but any distributed system can only meet two of the three at the same time.
  1. Consistency: When any node responds to the request, it must either provide the latest status information or provide no status information
  2. Availability: Any node in the system must be able to continue reading and writing
  3. Partition Tolerance: The system can tolerate the loss of any number of messages between two nodes and still function normally

https://preview.redd.it/1ozfwk7u7m851.png?width=1400&format=png&auto=webp&s=fdee6318de2cf1c021e636654766a7a0fe7b38b4
A distributed system aims to provide consistent services. Therefore, the consistency attribute requires that the two nodes in the system cannot provide conflicting status information or expired information, which can ensure the security of the distributed system. The availability attribute is to ensure that the system can continuously update its status and guarantee the availability of distributed systems. The partition tolerance attribute is related to the network communication delay, and, under the semi-synchronous network model, it can be the status before GST when the network is in an asynchronous status with an unknown delay in the network communication. In this condition, communicating nodes may not receive information from each other, and the network is thus considered to be in a partitioned status. Partition tolerance requires the distributed system to function normally even in network partitions.
The proof of the CAP theorem can be demonstrated with the following diagram. The curve represents the network partition, and each network has four nodes, distinguished by the numbers 1, 2, 3, and 4. The distributed system stores color information, and all the status information stored by all nodes is blue at first.
  1. Partition tolerance and availability mean the loss of consistency: When node 1 receives a new request in the leftmost image, the status changes to red, the status transition information of node 1 is passed to node 3, and node 3 also updates the status information to red. However, since node 2 and node 4 did not receive the corresponding information due to the network partition, their status information is still blue. At this moment, if the status information is queried through node 2, the blue returned by node 2 is not the latest status of the system, thus losing consistency.
  2. Partition tolerance and consistency mean the loss of availability: In the middle figure, the initial status information of all nodes is blue. When node 1 and node 3 update the status information to red, node 2 and node 4 keep the outdated blue information due to the network partition. When status information is queried through node 2, node 2 must follow consistency and first confirm with the other nodes that it holds the latest status before answering. Because of the network partition, node 2 cannot receive any information from node 1 or node 3, so it cannot determine whether it is in the latest status and chooses not to return any information, thus depriving the system of availability.
  3. Consistency and availability mean the loss of the partition tolerance: In the right-most figure, the system does not have a network partition at first, and both status updates and queries can go smoothly. However, once a network partition occurs, it degenerates into one of the previous two conditions. It is thus proved that any distributed system cannot have consistency, availability, and partition tolerance all at the same time.

https://preview.redd.it/456x2blv7m851.png?width=1400&format=png&auto=webp&s=550797373145b8fc1471bdde68ed5f8d45adb52b
The discovery of the CAP theorem seems to declare that the aforementioned goals of the consensus protocol are impossible. However, if you look carefully, you may notice that these are all extreme cases, such as network partitions that completely block information transmission, which is rare, especially in a P2P network. In the second case, a real system rarely behaves like node 2 and returns nothing; the general practice is to query other nodes and, after a while, return the latest status as believed, regardless of whether responses from the other nodes have arrived. Therefore, although the CAP theorem points out that no distributed system can satisfy all three attributes at the same time, it is not a binary choice: the designer of a consensus protocol can weigh the three attributes according to the needs of the distributed system. However, as communication delay is always involved in a distributed system, one always needs to choose between availability and consistency while ensuring a certain degree of partition tolerance. Specifically, in the second case, it comes down to the value that node 2 returns: a possibly outdated value, or no value. Returning the possibly outdated value may violate consistency but guarantees availability; returning no value deprives the system of availability but guarantees its consistency. The Tendermint consensus protocol to be introduced chooses consistency in this trade-off. In other words, it will lose availability in some cases.
The genius of Satoshi Nakamoto is that, within the constraints of the CAP theorem, he managed to reach a reliable Byzantine consensus in a distributed network by combining the PoW mechanism, the Satoshi Nakamoto consensus, and economic incentives with appropriate parameter configuration. Whether Bitcoin's mechanism design solves the Byzantine Generals Problem has remained a dispute among academicians. Garay, Kiayias, and Leonardos analyzed the link between Bitcoin mechanism design and the Byzantine consensus in detail in their paper The Bitcoin Backbone Protocol: Analysis and Applications. In simple terms, the Satoshi Consensus is a probabilistic Byzantine fault-tolerant consensus protocol that depends on such conditions as the network communication environment and the proportion of malicious nodes' hashrate. When the proportion of malicious nodes' hashrate does not exceed 1/2 in a good network communication environment, the Satoshi Consensus can reliably solve the Byzantine consensus problem in a distributed environment. However, when the environment turns bad, even with the proportion within 1/2, the Satoshi Consensus may still fail to reach a reliable conclusion on the Byzantine consensus problem. It is worth noting that the quality of the network environment is relative to Bitcoin's block interval. The 10-minute block generation interval of Bitcoin can ensure that the system is in a good network communication environment in most cases, given the fact that the broadcast time of a block in the distributed network is usually just several seconds. In addition, economic incentives can motivate most nodes to actively comply with the protocol. It is thus considered that, with the current Bitcoin network parameter configuration and mechanism design, the Bitcoin mechanism design has reliably solved the Byzantine Consensus problem in the current network environment.

Practical Byzantine Fault Tolerance, PBFT

It is not an easy task to design a Byzantine fault-tolerant consensus protocol in a semi-synchronous network. The first practically usable Byzantine fault-tolerant consensus protocol is the Practical Byzantine Fault Tolerance (PBFT) designed by Castro and Liskov in 1999, the first of its kind with polynomial complexity. For a distributed system with $n$ nodes, the communication complexity is $O(n^2)$. Castro and Liskov showed in the paper that, by transforming a centralized file system into a distributed one using the PBFT protocol, the overall performance was only slowed down by 3%. In this section we will briefly introduce the PBFT protocol, paving the way for further detailed explanations of the Tendermint protocol and its improvements over PBFT.
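A quick count of messages in PBFT's normal-case three-phase flow shows where the $O(n^2)$ figure comes from; whether nodes message themselves is a counting convention, so the exact constants below are an assumption.

```python
# Rough normal-case message count for one PBFT consensus instance with n
# nodes, illustrating the O(n^2) communication complexity. Whether a node's
# broadcast to itself is counted is a convention; self-messages are excluded.

def pbft_normal_case_messages(n):
    pre_prepare = n - 1            # master -> each slave
    prepare = (n - 1) * (n - 1)    # each slave broadcasts to the other nodes
    commit = n * (n - 1)           # every node broadcasts to the other nodes
    return pre_prepare + prepare + commit

for n in (4, 7, 13, 100):
    print(f"n={n:>3}: ~{pbft_normal_case_messages(n):,} messages")
```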
The PBFT protocol with $n=3f+1$ nodes can tolerate up to $f$ Byzantine nodes. In the original paper of PBFT, full connection is required among all $n$ nodes, that is, any two of the $n$ nodes must be connected. All the nodes of the network jointly maintain the system status through network communication. In the Bitcoin network, a node can join or leave the consensus process at any time through hashrate mining; in contrast, the set of participating nodes in the PBFT protocol is managed by an administrator and must be determined before the protocol starts. All nodes in the PBFT protocol are divided into two categories: master nodes and slave nodes. There is only one master node at any time, and all nodes take turns being the master node. All nodes run in a rotation process called a View, in each of which the master node is reelected. The master node selection algorithm in PBFT is very simple: all nodes become the master node in turn by index number. In each view, all nodes try to reach a consensus on the system status. It is worth mentioning that in the PBFT protocol, each node has its own digital signature key pair. All sent messages (including request messages from the client) need to be signed to ensure the integrity of the message in the network and the traceability of the message itself (you can determine who sent a message based on the digital signature).
The following figure shows the basic flow of the PBFT consensus protocol. Assume that the current view's master node is node 0. Client C initiates a request to the master node 0. After the master node receives the request, it broadcasts the request to all slave nodes; the nodes process the request of client C and return the result to the client. After the client receives f+1 identical results from different nodes (based on the signature value), the result can be taken as the final result of the entire operation. Since the system can have at most f Byzantine nodes, at least one of the f+1 results received by the client comes from an honest node, and the security of the consensus protocol guarantees that all honest nodes reach consensus on the same status. So, the feedback from one honest node is enough to confirm that the corresponding request has been processed by the system.
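The client side of this flow is simple to sketch: collect signed replies and accept a result once f + 1 of them agree. A minimal illustration, with signature verification and networking omitted:

```python
# Minimal sketch of the PBFT client rule: a result is trusted once f + 1
# replies from distinct nodes agree on it, since at most f nodes are
# Byzantine. Signature verification and message transport are omitted.

def accept_result(replies, f):
    """replies: iterable of (node_id, result) pairs.
    Returns the accepted result once some result has been reported by at
    least f + 1 distinct nodes, otherwise None."""
    seen = {}  # result -> set of node ids that reported it
    for node_id, result in replies:
        seen.setdefault(result, set()).add(node_id)
        if len(seen[result]) >= f + 1:
            return result
    return None

replies = [(0, "ok:balance=42"), (3, "ok:balance=41"), (1, "ok:balance=42")]
print(accept_result(replies, f=1))  # "ok:balance=42" after two matching replies
```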

https://preview.redd.it/sz8so5ly7m851.png?width=1400&format=png&auto=webp&s=d472810e76bbc202e91a25ef29a51e109a576554
For the status synchronization of all honest nodes, the PBFT protocol has two constraints on each node: on one hand, all nodes must start from the same status, and on the other, the status transition of all nodes must be definite, that is, given the same status and request, the results after the operation must be the same. Under these two constraints, as long as the entire system agrees on the processing order of all transactions, the status of all honest nodes will be consistent. This is also the main purpose of the PBFT protocol: to reach a consensus on the order of transactions between all nodes, thereby ensuring the security of the entire distributed system. In terms of availability, the PBFT consensus protocol relies on a timeout mechanism to find anomalies in the consensus process and start the View Change protocol in time to try to reach a consensus again.
The figure above shows a simplified workflow of the PBFT protocol, where C is the client and 0, 1, 2, and 3 represent the 4 nodes. Specifically, 0 is the master node of the current view, 1, 2, and 3 are slave nodes, and node 3 is faulty. Under normal circumstances, the PBFT consensus protocol reaches consensus on the order of transactions between nodes through a three-phase protocol. The three phases are: Pre-Prepare, Prepare, and Commit:
  • In the pre-prepare phase, the master node is responsible for assigning a sequence number to the received client request and broadcasting the PRE-PREPARE message to the slave nodes. The message contains the hash value of the client request d, the sequence number of the current view v, the sequence number n assigned by the master node to the request, and the signature information of the master node sig. The scheme design of the PBFT protocol separates request transmission from the request sequencing process, and request transmission is not discussed here. A slave node that receives the message accepts it after confirming the message is legitimate and enters the prepare phase. The checks in this step cover the basic signature, hash value, and current view, and, most importantly, whether the master node has already assigned the same sequence number to another request from the client in the current view.
  • In the prepare phase, the slave node broadcasts the PREPARE message to all nodes (including itself), indicating that it assigns the sequence number n to the client request with the hash value d under the current view v, with its signature sig as proof. The node receiving the message will check the correctness of the signature, the matching of the view and sequence number, etc., and accept legitimate messages. When the PRE-PREPARE message about a client request (from the master node) received by a node matches the PREPARE messages from 2f slave nodes, the system has agreed on the sequence number assigned to the client request in the current view. This means that 2f+1 nodes in the current view agree with the request sequence number. Since these include messages from at most f malicious nodes, at least f+1 honest nodes have agreed with the allocation of the request sequence number. With f malicious nodes, there are a total of 2f+1 honest nodes, so f+1 represents the majority of the honest nodes, which is the consensus of the majority mentioned before.
  • After a node (either the master node or a slave node) has received the PRE-PREPARE message for a client request and 2f matching PREPARE messages, it broadcasts a COMMIT message across the network and enters the commit phase. This message is used to indicate that the node has observed that the whole network has reached a consensus on the sequence number allocation of the request message from the client. When the node receives 2f+1 COMMIT messages, at least f+1 of them come from honest nodes, that is, most of the honest nodes have observed that the entire network has reached consensus on the arrangement of sequence numbers of the request message from the client. The node can then process the client request and return the execution result to the client.
Roughly speaking, in the pre-prepare phase, the master node assigns a sequence number to each new client request. During the prepare phase, all nodes reach consensus on the client request's sequence number in this view, while the commit phase guarantees the consistency of the request sequence number across different views. In addition, the design of the PBFT protocol itself does not require request messages to be committed in the order of their assigned sequence numbers; they may be committed out of order, which can improve the efficiency of the implementation of the consensus protocol. Yet the messages are still executed in the order of the sequence numbers assigned by the consensus protocol, for the consistency of the distributed system.
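A highly simplified sketch of the per-request quorum logic described above (accept a PRE-PREPARE, collect 2f matching PREPAREs, then 2f+1 COMMITs) is shown below. Signature checks, view changes, and message transport are deliberately left out, so this is only an illustration of the thresholds, not a usable PBFT implementation.

```python
# Highly simplified per-request state tracking for PBFT's three phases.
# A RequestSlot is created when a node accepts a legitimate PRE-PREPARE for
# (view, seq, digest). Signatures, view changes, and networking are omitted;
# only the quorum thresholds from the text are modeled.

class RequestSlot:
    def __init__(self, f, view, seq, digest):
        self.f = f
        self.view, self.seq, self.digest = view, seq, digest
        self.prepares = set()   # node ids that sent a matching PREPARE
        self.commits = set()    # node ids that sent a matching COMMIT
        self.prepared = False
        self.committed = False

    def on_prepare(self, node_id, view, seq, digest):
        if (view, seq, digest) == (self.view, self.seq, self.digest):
            self.prepares.add(node_id)
        # Accepted PRE-PREPARE plus 2f matching PREPAREs => "prepared"
        if len(self.prepares) >= 2 * self.f:
            self.prepared = True
        return self.prepared

    def on_commit(self, node_id, view, seq, digest):
        if (view, seq, digest) == (self.view, self.seq, self.digest):
            self.commits.add(node_id)
        # 2f + 1 matching COMMITs on a prepared request => safe to execute
        if self.prepared and len(self.commits) >= 2 * self.f + 1:
            self.committed = True
        return self.committed
```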
In the three-phase protocol execution of the PBFT protocol, in addition to maintaining the status information of the distributed system, each node also needs to log all kinds of consensus information it receives. The gradual accumulation of logs will consume considerable system resources. Therefore, the PBFT protocol additionally defines checkpoints to help nodes with garbage collection. A checkpoint can be set every 100 or 1000 request sequence numbers. After the client request at the checkpoint is executed, the node broadcasts a CHECKPOINT message throughout the network, indicating that after the node executes the client request with sequence number n, the hash value of the system status is d, vouched for by its own signature sig. After 2f+1 matching CHECKPOINT messages (one of which can come from the node itself) are received, most of the honest nodes in the entire network have reached a consensus on the system status after the execution of the client request with sequence number n, and all relevant log records of client requests with sequence numbers less than n can then be cleared. The node needs to save these 2f+1 CHECKPOINT messages as proof of the legitimate status at this moment, and the corresponding checkpoint is called a stable checkpoint.
The three-phase protocol of the PBFT protocol can ensure the consistency of the processing order of client requests, and the checkpoint mechanism helps nodes perform garbage collection and further ensures the status consistency of the distributed system, both of which guarantee the security of the distributed system as described above. How is the availability of the distributed system guaranteed? In the semi-synchronous network model, a timeout mechanism is usually introduced, which is related to delays in the network environment. It is assumed that the network delay has a known upper bound after GST. In that case, an initial timeout value is usually set according to the network conditions where the system is deployed. When a timeout event occurs, besides triggering the corresponding processing flow, additional mechanisms are activated to readjust the waiting time. For example, an algorithm like TCP's exponential backoff can be adopted to adjust the waiting time after a timeout event.
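The timeout-adjustment idea mentioned above can be sketched as a simple exponential backoff, similar in spirit to TCP's; the initial timeout and the cap below are assumptions.

```python
# Simple exponential backoff for consensus timeouts, in the spirit of TCP's
# retransmission backoff. The initial timeout and cap are illustrative values.

def next_timeout(current, factor=2.0, cap=60.0):
    """Double the waiting time after a timeout event, up to a cap."""
    return min(current * factor, cap)

timeout = 1.0  # assumed initial timeout in seconds
for attempt in range(6):
    print(f"attempt {attempt}: wait up to {timeout:.1f}s before triggering a view change")
    timeout = next_timeout(timeout)
```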
To ensure the availability of the system, the PBFT protocol also introduces a timeout mechanism. In addition, because the master node itself may suffer a Byzantine failure, the PBFT protocol needs to ensure the security and availability of the system in that case as well. When a Byzantine failure occurs in the master node, for example, when a slave node does not receive the PRE-PREPARE message from the master node within the time window, or the PRE-PREPARE message it receives is determined to be illegitimate, the slave node can broadcast a VIEWCHANGE message to the entire network, indicating that the node requests to switch to the new view with sequence number v+1. In the message, n indicates the request sequence number corresponding to the latest stable checkpoint local to the node, and C is the set of 2f+1 legitimate CHECKPOINT messages proving that stable checkpoint, as described above. After the latest stable checkpoint and before initiating the VIEWCHANGE message, the system may have reached a consensus on the sequence numbers of some request messages in the previous view. To ensure the consistency of these request sequence numbers across the view change, the VIEWCHANGE message needs to carry this information into the new view, which is the meaning of the P field in the message. P contains, for every client request message collected at the node with a request sequence number greater than n, the proof that a consensus has been reached on its sequence number at the node: the legitimate PRE-PREPARE message of the request and 2f matching PREPARE messages. When the master node of view v+1 collects 2f+1 VIEWCHANGE messages, it can broadcast the NEW-VIEW message and take the entire system into the new view. To preserve the security of the system in combination with the three-phase protocol of the PBFT protocol, the construction rules of the NEW-VIEW message are designed in a quite complicated way; you can refer to the original PBFT paper for more details.

https://preview.redd.it/x5efdc908m851.png?width=1400&format=png&auto=webp&s=97b4fd879d0ec668ee0990ea4cadf476167a2948
The VIEWCHANGE message contains a lot of information. For example, C contains 2f+1 signatures, and P contains several signature sets, each with 2f+1 signatures. At least 2f+1 nodes need to send a VIEWCHANGE message before the system can enter the next view, and that means, in addition to the complex logic of constructing the VIEWCHANGE and NEW-VIEW messages, the communication complexity of the view change protocol is $O(n^2)$. Such complexity also limits the PBFT protocol to supporting only a few nodes; with around 100 nodes, PBFT is usually too complex to deploy in practice. It is worth noting that in some materials the communication complexity of the PBFT protocol is inappropriately attributed to the full connection between the n nodes. By changing the fully connected network topology to the P2P network topology based on distributed hash tables commonly used in blockchain projects, the high communication complexity caused by full connection can be conveniently avoided, yet it remains difficult to improve the communication complexity of the view change process. In recent years, researchers have proposed reducing the amount of communication in this step by adopting an aggregate signature scheme. With this technology, 2f+1 signatures can be compressed into one, thereby reducing the communication volume during a view change.
submitted by coinexchain to u/coinexchain

Plan To Recover Our Losses


Background on the Initiative

My name is Matt. I’ve lived in Calgary my whole life, and been running businesses and programming since I was 10 years old. I’m a recent graduate of the University of Calgary in a business and computer science double major, and I currently manage the software team (6 students) at a small Calgary IoT startup. My past business experiences include running a window cleaning franchise across 6 communities, a popular concession stand, and a free web hosting service with over 10,000 clients.
I first got involved with cryptocurrency in 2017, when we had the big run up. Prior to that, I’d done a ton of research but never actually invested. While my losses in Quadriga are significant, they’re nowhere near some of the losses I’ve been hearing about. I’m fortunate to be in a “walk away” position if I so choose and I more or less did for the first week. But I couldn’t stay away. It isn’t right. Especially not now when the solution is so close and the potential impact is so significant.
Quadriga Initiative is the result of 6-7 months of on and off brainstorming, collaboration, and iteration around the central goal of recovering what's been lost.
The money is almost certainly not accessible. (I'm pretty sure it would have been found already.) We'll all get something from the bankruptcy, and I appreciate the legal team and official committee working hard on our behalf, but I fear it won't even come close to making up for what was lost. For many people - their whole life savings. It's not a very satisfying recovery. It doesn't leave anyone whole. It leaves a lot of people behind.
Without funds to pull from, any full recovery solution has to center around creating new value. Entrepreneurs and business leaders are creating value every day, and this is where the idea comes from.
We take advantage of the fact we have a large affected user community, tons of economic bargaining power, and a vast network. Many in the business community were affected, know someone who was affected, or feel horrible about what happened. My discussions with business leaders have shown that they generally desire to make this right, and businesses regularly do "goodwill" donations or gestures for marketing. The Quadriga Initiative provides a way businesses can help easily and in a "win win" way by running token-accepting promotions. We then provide a competitive framework that helps to promote businesses which make the biggest impact, highly incentivizing a faster recovery.
At this stage, everything is more or less ready. We have a primary exchange partner, a growing team of affected users, and multiple business connections. What remains is the incredibly tough challenge of creating trust and understanding among a community that's been completely devastated in the worst way. This is no easy task.
We need your help! If things don't make sense, or you still have questions, or you don't understand something, please take the time to ask and reach out! In addition to commenting here, please feel free to chat with us on Telegram: https://t.me/QuadrigaInitiative



Where Does the Money Come From?

The money (value) comes out of the profit margin of businesses. Businesses normally sell a product or service at a profit over the cost of production. Instead, a business would sell the product or service at a discount (less profit), accepting tokens in place of the difference.
While this may seem generous, like the business is giving something away, it also benefits the business as well: it can generate additional sales, bring in new customers, and show the business giving back to the community.
Once a successful marketplace is established, affected users will have a multitude of businesses where they can spend tokens and get good deals. As well, other consumers can buy the tokens at a discount (supporting affected users), then use them to save money.
The leaderboard and large affected user community give a strong advantage to businesses to participate and offer the best deals. Businesses that have recovered the most are rewarded with more people seeing their promotion (free advertising).



The Various Uses For Tokens

Our Partner Exchange: Tokens will be tradable and accepted at face value towards the trading fees on the partner exchange. A trader who wants to save money on trades can stock up on the tokens to gain a discount over other customers who don't bother. The tokens can be used towards 50%-100% of the trading fees depending on the calendar date. This means a heavy discount for affected users and is essentially a price segment for the exchange.
In addition, the primary exchange partner we have is looking into giving back a small portion (15%) of gross trading revenue towards cashing tokens. This is done to incentivize the affected user community to spread the word about the exchange.
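As a concrete, hypothetical example of what that fee coverage could look like for a trader, consider the sketch below; the fee rate, the coverage percentage, and the balances are all made-up inputs, since the actual coverage varies between 50% and 100% depending on the calendar date.

```python
# Hypothetical illustration of paying part of an exchange trading fee with
# Quadriga Initiative tokens. The fee rate and the token-coverage percentage
# are made up; per the initiative, coverage ranges from 50% to 100% of the
# fee depending on the calendar date, and tokens count at $1 face value.

def fee_breakdown(trade_value_cad, fee_rate, token_coverage, token_balance):
    fee = trade_value_cad * fee_rate
    max_token_portion = fee * token_coverage
    tokens_used = min(token_balance, max_token_portion)
    return {"fee": fee, "paid_in_tokens": tokens_used, "paid_in_cash": fee - tokens_used}

print(fee_breakdown(trade_value_cad=10_000, fee_rate=0.002,
                    token_coverage=0.70, token_balance=5.00))
# $20.00 fee -> up to $14.00 coverable by tokens, limited here by the $5.00 balance
```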
Participating Businesses: Businesses in the community accept the tokens towards purchases to promote to Quadriga victims, supporters, and deal seekers. It functions similar to a discount, where the tokens are applied as a portion of the sale price, with a few additional advantages for the business: it price segments, it can run continuously, and it doubles as a give-back gesture to the wider community.
Businesses sell promotions for tokens, and send the tokens to a burn address that encodes the business website URL. To further encourage business participation, a leaderboard is set up to promote those businesses which have burned the most tokens. The leaderboard is a useful place to go shopping if you have tokens. You can find businesses who take them and get the best deals. All information is on the blockchain, enabling anyone to set up a leaderboard or start accepting tokens.
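Since every burn is recorded on the blockchain, anyone could rebuild the leaderboard by tallying burn transactions per business. Here is a rough sketch; the record format (business URL plus amount burned) is hypothetical, as the exact way the URL is encoded into the burn address is not specified here.

```python
# Rough sketch of building a burn leaderboard from on-chain burn records.
# The record format below is hypothetical; in practice the business URL would
# be decoded from the burn address used by each business.
from collections import defaultdict

# Hypothetical burn records scraped from the token's transfer history.
burn_records = [
    {"business_url": "examplecoffee.ca", "tokens_burned": 120.0},
    {"business_url": "examplehosting.io", "tokens_burned": 480.5},
    {"business_url": "examplecoffee.ca", "tokens_burned": 75.0},
]

def build_leaderboard(records):
    totals = defaultdict(float)
    for rec in records:
        totals[rec["business_url"]] += rec["tokens_burned"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for rank, (url, total) in enumerate(build_leaderboard(burn_records), start=1):
    print(f"{rank}. {url}: {total:.1f} tokens burned")
```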



Token Flow Diagram

The linked diagram is a handy visualization of the initiative and how the various parties interact:
https://www.quadrigainitiative.com/Quadriga%20Initiative%20Diagram.pdf
The complete initiative is a full marketplace, enabling the beneficial (win win) interaction of all parties and the gradual recovery of losses over time. The token supply is finite, limited by the amount of losses we can verify, and all tokens eventually get cashed for $1 worth of products/services (or primary exchange gross trading revenue) as the program runs.


Our Primary Exchange Partner

Since the primary exchange is handling validation and distributing the tokens, it's important that it be trustworthy. Given the history with Quadriga, most affected users (including every member of our team) are legitimately concerned about anyone losing their funds again. This is the primary reason we've chosen to work with TxQuick.


Proof of Reserves and Why It Matters

In case you missed them, so far this year we've seen 3 large scale exchange collapses:
Each one represents massive losses for those involved - hundreds and thousands of affected lives. These are real people and families at the other ends, with hopes and dreams, who worked hard for their money.
In the case of QuadrigaCX, it took the freezing of the bank accounts, the death/disappearance of the CEO, and concerted legal action to even realize it was insolvent.
Exchanges can easily continue to operate for years with whatever level of reserves they like. Third party audits are riddled with holes like:
On top of that - most exchange platforms still don't even bother to audit. Despite the warnings about storing funds on exchanges, people still do. And remember that many affected users weren't storing funds on Quadriga - they simply got stuck with no way to withdraw.
Proof of Reserves asks exchanges to:
What it doesn't prevent:
What it does prevent:
Check this link for more details on Proof of Reserves, including the full hash tree algorithm.
Despite the relative simplicity of publishing wallet addresses, the vast selection of exchanges we have in Canada, and the many millions of dollars stored, not a single exchange has done so. The hash tree algorithm has existed since 2014. It's presently implemented on one exchange (last audited in 2014).
We feel that Proof of Reserves is key to preventing future exchange collapses, which is why we are so pleased to have a primary exchange partner which will be implementing the full algorithm. While we can't control other exchanges, traders now have an option to use an exchange which proves full backing of all deposits and we hope this will encourage wider adoption and greater industry transparency.
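For readers curious what the hash tree approach looks like in practice, here is a minimal sketch of the liabilities side: hash each customer balance into a leaf, combine leaves pairwise up to a published root, and let each customer verify their leaf is included. Real schemes also salt the leaves, carry balance sums in each node, and pair this with signed proof of on-chain reserves; this is only the skeleton, and the customer data is invented.

```python
# Minimal skeleton of the Merkle-tree "proof of liabilities" used in Proof of
# Reserves schemes: each customer's (id, balance) becomes a leaf, leaves are
# hashed pairwise up to a published root, and a customer can verify inclusion
# with a short path of sibling hashes. Salting and per-node balance sums are
# omitted for brevity; the customer data below is made up.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(customer_id: str, balance_sats: int) -> bytes:
    return h(f"{customer_id}:{balance_sats}".encode())

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

customers = [("alice", 150_000_000), ("bob", 20_000_000), ("carol", 75_000_000)]
root = merkle_root([leaf(cid, bal) for cid, bal in customers])
print("published root:", root.hex())
# The exchange publishes the root plus signed on-chain addresses; each customer
# re-derives their own leaf and checks that it hashes up to the published root.
```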


Timeline for the Initiative

The initiative process breaks down into roughly 3 stages:
Pre-Claim Stage - We are working to save affected user balances for later validation, as well as determine if there is sufficient interest in the project. This is ongoing.
Exchange Stage - We bring the primary exchange online, and process claims. Recovery starts through exchange trading fee discounts and eventually gross trading revenue. The exchange platform is expected to launch within a few months.
Marketplace Stage - Once we have enough individuals with tokens, we bring in the first businesses from the wider community. After we have several initial businesses, the marketplace grows organically as more businesses sign up over time. This is approximately a year after launching the exchange.
Full recovery (all losses) is likely to take multiple years, anywhere from 2 to 25 years. There are a lot of factors to consider.


Verification of Claims

Accurately capturing losses is key. Businesses are interested in helping honest victims of a crime who had their money stolen from them, and not too interested in supporting any fraud. We've been working hard to make our process as easy as possible for affected users, while being as hard as possible for false claims (claiming wrong amounts, losses of others, or fake claims).


How To Sign Up

If you wish to participate, please sign up at https://www.quadrigainitiative.com/.
You can do a pre-claim to save your balance, or an email only sign up just to show interest and get the launch email.



How You Can Help

We are stronger together!


Thanks so much!
submitted by azoundria2 to QuadrigaCX [link] [comments]

Quadriga Initiative - Additional Information and Clarifications

Quadriga Initiative - Additional Information and Clarifications

Introduction / Summary

The Quadriga Initiative is an independent process where affected users and businesses in the community work together to recover losses from QuadrigaCX. An exchange (the primary exchange) will verify claims and distribute free tokens representing losses. Tokens will be accepted at the primary exchange and by participating businesses at face value. There is a white paper here with more detail:
https://quadrigainitiative.com/Quadriga%20Initiative.pdf
If you wish to participate in the Quadriga Initiative and receive free tokens representing your loss, there is a pre-claim process now open. A pre-claim uses your QCX client ID, first name as registered on the QCX platform, and a valid email address to copy your balance information and associate it with your email address.
https://quadrigainitiative.com/
Although a personal email will work, it is recommended for privacy and security to set up a new "forwarder" email account that doesn't personally identify you, with a unique password. Make sure that whatever email process you set up still works to reach you in a few months' time.
  • We are a community initiative which is not connected with the bankruptcy process. Participation does not impact your bankruptcy claim. You can find the official bankruptcy information on the Miller Thompson website.
  • We have taken all reasonable measures to protect our website and stored data against SQL injection. The website back-end is simple, all input is sanitized, and all access passwords are 16+ character full random. (I have a background in web hosting.)
  • There is no cost to participate and the pre-claim process takes approximately 3 minutes.
  • Please be sure to keep a copy of your bankruptcy claim paperwork for later validation!


Background on the Initiative

My name is Matt. I’ve lived in Calgary my whole life, and been running businesses and programming since I was 10 years old. I’m a recent graduate of the University of Calgary in a business and computer science double major, and I currently manage the software team (6 students) at a small Calgary IoT startup. My past business experiences include running a window cleaning franchise across 6 communities, a popular concession stand, and a free web hosting service with over 10,000 clients.
I first got involved with cryptocurrency in 2017, when we had the big run up. Prior to that, I’d done a ton of research but never actually invested. While my losses in Quadriga are significant, they’re nowhere near some of the losses I’ve been hearing about. I’m fortunate to be in a “walk away” position if I so choose and I more or less did for the first week. But I couldn’t stay away. It isn’t right. Especially not now when the solution is so close and the potential impact is so significant.
Quadriga Initiative is the result of 6-7 months of intense brainstorming, collaboration, and perpetual iteration around the central problem of how to recover what's been lost.
The money is almost certainly not accessible. (I'm pretty sure it would have been found already.) We'll all get something from the bankruptcy, but for most of us I fear it won't really make up for what was lost. For many people - their whole life savings. It's not a very satisfying recovery. It doesn't leave anyone whole. It leaves a lot of people behind.
Without funds to pull from, any full recovery solution has to center around creating new value. Entrepreneurs and business leaders are creating value every day, and this is where the idea comes from.
We take advantage of the fact we have a large affected user community, tons of economic bargaining power, and a vast network. Many in the business community were affected, know someone who was affected, or feel horrible about what happened. My discussions with business leaders have shown that they generally desire to make this right, and businesses regularly do "goodwill" donations or gestures for marketing. The Quadriga Initiative provides a way businesses can help easily and in a "win win" way by running token-accepting promotions. We then provide a competitive framework that helps to promote businesses which make the biggest impact, highly incentivizing a faster recovery.
At this stage, everything is more or less ready to launch. We have a primary exchange partner, a small team of affected users, and multiple business connections. What remains is the incredibly tough challenge of creating trust and understanding among a community that's been completely devastated in the worst way. This is no easy task.
We need your help! If things don't make sense, or you still have questions, or you don't understand something, please take the time to ask and reach out! In addition to commenting here, please feel free to chat with us on Telegram: https://t.me/QuadrigaInitiative



Where Does the Money Come From?

The money (value) comes out of the profit margin of businesses. Businesses normally sell a product or service at a profit over the cost of production. Instead, a business would sell the product or service at a discount (less profit), accepting tokens in place of the difference.
While this may seem generous, as if the business is giving something away, it benefits the business as well:
  • The business can get additional sales. Even though the profit per sale is lower, the business still makes a profit on those additional sales.
  • The business can find new customers. Even if a business sells a product or service "at cost" (meaning zero profit), it has established a relationship. The customer may buy other products or services in the future, or it could be part of a subscription.
  • The business is seen positively as "giving back", creating a better future, helping fraud victims, etc.
Once a successful marketplace is established, affected users will have a multitude of businesses where they can spend tokens and get good deals. As well, other consumers can buy the tokens at a discount (supporting affected users), then use them to save money.
The leaderboard and the large affected user community give businesses a strong incentive to participate and offer the best deals. Businesses that have contributed the most to the recovery are rewarded with more people seeing their promotions (free advertising).
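To make the arithmetic concrete, here is a minimal sketch using invented numbers (a $100 product, $70 cost, $20 of the price accepted in tokens). It illustrates the trade-off described above; it is not a statement of actual program terms:

```python
# Hypothetical numbers only - they illustrate how accepting tokens trades
# margin for extra sales; they are not actual Quadriga Initiative terms.
price = 100.00          # normal sale price
cost = 70.00            # cost to deliver the product/service
token_portion = 20.00   # part of the price accepted in tokens (face value $1 each)

profit_full_price = price - cost                    # $30 on a regular sale
profit_token_sale = (price - token_portion) - cost  # $10 cash profit on a token sale

# If the promotion brings in sales that would not have happened otherwise,
# each extra sale still adds cash profit while retiring $20 worth of tokens.
extra_sales = 50
extra_profit = extra_sales * profit_token_sale      # $500 of new cash profit
tokens_retired = extra_sales * token_portion        # $1,000 of losses recovered

print(f"Profit per full-price sale: ${profit_full_price:.2f}")
print(f"Profit per token sale:      ${profit_token_sale:.2f}")
print(f"Extra profit from promo:    ${extra_profit:.2f}")
print(f"Token value retired:        ${tokens_retired:.2f}")
```

The point is simply that promotion-driven sales can still add cash profit for the business while retiring token value for affected users.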



The Various Uses For Tokens

The Primary Exchange: Tokens will be tradable and accepted at face value towards the trading fees on the primary exchange. A trader who wants to save money on trades can stock up on tokens to gain a discount over customers who don't bother. The tokens can be used towards 50%-100% of the trading fees, depending on the calendar date. This means a heavy discount for affected users and acts, more or less, as price segmentation for the exchange.
In addition, the primary exchange partner we have at the moment is looking into giving back a small portion (15%) of gross trading revenue towards cashing tokens. This is done to incentivize the affected user community to spread the word about the exchange.
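As a rough sketch of how these two mechanisms interact, the snippet below applies tokens against a trading fee and estimates how long a 15% revenue share would take to cash out a given token supply. The 50%-100% fee coverage and the proposed 15% share come from the description above; the fee, revenue, and supply figures are invented for illustration:

```python
# The 50%-100% fee coverage and the proposed 15% revenue share come from the
# proposal above; the fee, revenue, and supply figures below are invented.

def trading_fee_due(fee: float, tokens_applied: float, coverage: float) -> float:
    """Cash still owed after tokens cover up to `coverage` (0.5-1.0) of the fee."""
    covered = min(tokens_applied, fee * coverage)
    return fee - covered

def months_to_cash_out(token_supply: float, monthly_gross_revenue: float,
                       revenue_share: float = 0.15) -> float:
    """Months for the revenue share to buy back the outstanding tokens at $1 each."""
    monthly_buyback = monthly_gross_revenue * revenue_share
    return token_supply / monthly_buyback

print(trading_fee_due(fee=10.00, tokens_applied=10.00, coverage=0.5))  # 5.0 - half covered
print(trading_fee_due(fee=10.00, tokens_applied=10.00, coverage=1.0))  # 0.0 - fully covered
print(months_to_cash_out(token_supply=1_000_000, monthly_gross_revenue=200_000))  # ~33 months
```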
Participating Businesses: Businesses in the community accept the tokens towards purchases, promoting to Quadriga victims, supporters, and deal seekers. It functions similarly to a discount, with the tokens applied as a portion of the sale price, plus a few additional advantages for the business:
  • It segments prices. With a blanket 20% discount, the business loses revenue on customers who would have bought at full price anyway - nobody likes to throw away free money. A token promotion mostly reaches deal seekers and affected users, so that full-price revenue is preserved.
  • It can run continuously. A 20% discount running continuously just lowers the perceived value of the product by 20%. A token-accepting promotion can run long-term without that effect, enabling the business to attract more customers with less effort.
  • It's a give-back play, showing that the business cares about the wider community and has a larger agenda than pure profit (i.e. trying to create a better future).
Businesses sell promotions for tokens, and send the tokens to a burn address that encodes the business website URL. To further encourage business participation, a leaderboard is set up to promote those businesses which have burned the most tokens. The leaderboard is a useful place to go shopping if you have tokens. You can find businesses who take them and get the best deals. All information is on the blockchain, enabling anyone to set up a leaderboard or start accepting tokens.
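Because burns are recorded on the blockchain, anyone could rebuild the leaderboard themselves. Here is a minimal sketch of that aggregation; the burn-record format and business names are hypothetical, since the actual encoding would follow whatever spec the initiative publishes:

```python
from collections import defaultdict

# Hypothetical burn records as they might be read off the blockchain:
# (business website decoded from the burn address, tokens burned).
burn_records = [
    ("examplecoffee.ca", 250.0),
    ("yycwidgets.com", 1200.0),
    ("examplecoffee.ca", 400.0),
    ("calgarybooks.ca", 75.0),
]

def build_leaderboard(records):
    """Total the tokens burned per business and rank from most to least."""
    totals = defaultdict(float)
    for website, amount in records:
        totals[website] += amount
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for rank, (website, total) in enumerate(build_leaderboard(burn_records), start=1):
    print(f"{rank}. {website}: {total:.0f} tokens burned")
```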



Token Flow Diagram

The following diagram is a handy visualization of the initiative and how the various parties interact:
[Diagram: Quadriga Initiative token flow]
The complete initiative is a full marketplace, enabling the beneficial (win win) interaction of all parties and the gradual recovery of losses over time. The token supply is finite, limited by the amount of losses we can verify, and all tokens eventually get cashed for $1 worth of products/services (or primary exchange gross trading revenue) as the program runs.


Our Primary Exchange Partner

Since the primary exchange is handling validation and distributing the tokens, it's important that they be trustworthy. Given the history with Quadriga, most affected users (including every member of our team) are legitimately concerned about losing their funds again. This is the primary reason we've chosen to work with TxQuick.
  • TxQuick is being developed by Ethan Burnside, who demonstrated his integrity in 2012-2013 when he ran BTC Trading Corp. When it was wound down, he spent significant personal funds to keep it running long enough for everyone to get their money out - likely the only time in history that an exchange has shut down and every customer got their funds back. You can learn more about him from his post here.
  • We've had extensive discussions on Telegram about security. Ethan is open, transparent, and extremely knowledgeable. He has invested heavily in developing a system of secure multi-sig wallets. His previous exchange was never successfully hacked. If you have any questions, Ethan is happy to answer them!
  • Ethan is strongly in favour of publishing wallet public keys. The exchange will feature a full transparency page to allow anyone to see that all funds are fully backed. In the future, a full proof of reserves will be deployed to assure all customers that their balances are represented.
  • In addition to the token validation/verification function:
    • TxQuick will be the first platform to allow buying and selling of the tokens.
    • TxQuick proposes to accept the tokens at face value towards trading fees on the exchange. Affected users can use tokens to get free or discounted trading (50%+ off).
    • TxQuick will also handle a slow token payback, enabling tokens to be exchanged 1:1 for cash over time using 15% of gross trading revenue.
  • This proposal is subject to approval by the TxQuick board and could change. It also requires a minimum level of interest from the affected user community: at least 1,000 sign-ups.
  • While it might seem like Ethan is being super generous and giving a lot away for free, again this is mutually beneficial (win win). Here are some of the benefits to the primary exchange:
    • Lots of sign-ups from affected users and, later, interested consumers, many of whom will stay to use the platform. Ethan desires to achieve a dominant position in the Canadian marketplace.
    • The token program provides effective price segmentation, increasing revenue over time. (Low prices = lost profit, high prices = fewer customers, price segmentation = more profit and more customers.)
    • Customers with recovered funds are likely to be more loyal and prefer the platform, and the profit share incentivizes spreading the word about the platform. (Interests are aligned.)
  • You are not required to use the primary exchange platform for trading or to deposit any money. You are free to sign up, receive your free tokens, and continue trading on any other platform, or just use the marketplace.


Proof of Reserves and Why It Matters

In case you missed them, so far this year we've seen three large-scale exchange collapses:
  • QuadrigaCX
  • EZ-BTC
  • Cryptopia
Each one represents massive losses for those involved - many thousands of affected lives. These are real people and families at the other end, with hopes and dreams, who worked hard for their money.
In the case of QuadrigaCX, it took the freezing of its bank accounts, the death/disappearance of the CEO, and concerted legal action for anyone to even realize it was insolvent.
Exchanges can easily continue to operate for years with whatever level of reserves they like. Third-party audits are riddled with holes:
  • How can they possibly know the client list they're given is legitimate and fully inclusive?
  • How can you know the funds weren't borrowed just for the audit?
  • How old is the report? How can you trust the auditor?
On top of that - most exchange platforms still don't even bother to audit. Despite the warnings about storing funds on exchanges, people still do. And remember that many affected users weren't storing funds on Quadriga - they simply got stuck with no way to withdraw.
Proof of Reserves asks exchanges to:
  • Publish the wallet public keys so people can see that funds are fully backed. (A satoshi test can prove ownership of those wallets.)
  • Publish a hash tree to let each customer validate that their balance is included in the total.
What it doesn't prevent:
  • As is the case today, if funds are not secured in proper multi-sig wallets, or if multiple exchange operators are corrupt, the funds could still be taken, up to what's stored. However, this would be known to everyone immediately instead of revealed whenever admins felt like it (or never).
  • The balances of customers who never check the hash tree could be excluded by a dishonest exchange, which wouldn't be noticed until one of those customers decided to check.
  • A dishonest exchange could still dispute the balance of a customer or arbitrarily prevent withdrawals. In this case, the customer and exchange would have to sort that out.
  • A dishonest exchange could pretend to own wallets it doesn't. A satoshi test would help with this, where the exchange operators send a small amount at a specified time.
  • While it makes things safer, it's still not a good idea to store funds on the exchange.
What it does prevent:
  • The exchange owner can't spend the funds of active customers while still claiming to hold them.
    • e.g. QuadrigaCX, EZ-BTC
  • The exchange owner can't conceal a hack or theft of funds; it becomes known immediately.
    • e.g. Mt. Gox, Cryptopia
  • Anyone can see whether the exchange is solvent before trading.
    • e.g. anyone with "bad timing" on an insolvent exchange.
Check this link for more details on Proof of Reserves, including the full hash tree algorithm.
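For readers who want a feel for the hash tree idea before following that link, here is a heavily simplified sketch: the exchange publishes a single root hash, and each customer can verify that their balance leaf is included without seeing anyone else's details. This is an illustration only (a real proof-of-liabilities tree would also propagate balance sums up the tree and use salted commitments), not the exact algorithm any particular exchange uses:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(customer_id: str, balance_sats: int) -> bytes:
    # A real scheme would use a salted/blinded commitment and also carry the
    # balance up the tree (a "Merkle sum tree"); this is deliberately simplified.
    return h(f"{customer_id}:{balance_sats}".encode())

def build_tree(leaves):
    """Return every level of a simple Merkle tree: leaves first, root last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                 # duplicate the last node if the count is odd
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def proof_for(levels, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling hash, 0 if our node is the left child)
        index //= 2
    return path

def verify(leaf_hash, path, root):
    node = leaf_hash
    for sibling, node_pos in path:
        node = h(node + sibling) if node_pos == 0 else h(sibling + node)
    return node == root

# A customer ("bob") checks that his balance is included in the published root.
leaves = [leaf("alice", 150_000), leaf("bob", 2_500_000), leaf("carol", 40_000)]
levels = build_tree(leaves)
root = levels[-1][0]
print(verify(leaves[1], proof_for(levels, 1), root))  # True
```

The key property is that Bob only needs the sibling hashes along his own path, so the exchange can prove inclusion without publishing the full customer list.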
Despite the relative simplicity of publishing wallet keys, the vast selection of exchanges we have in Canada, and the many millions of dollars stored, not a single exchange has done so. The hash tree algorithm has existed since 2014, and it is presently implemented on only one exchange (last audited in 2014).
We feel that Proof of Reserves is the key to preventing future exchange collapses, which is why we are so pleased to have a primary exchange partner that will be implementing the full algorithm. While we can't control other exchanges, traders now have the option of using an exchange which proves full backing of all deposits, and we hope this will encourage wider adoption and greater industry transparency.


Timeline for the Initiative

The initiative process breaks down into roughly 3 stages:
Pre-Claim Stage - We are working to save affected user balances for later validation, as well as determine if there is sufficient interest in the project. This is ongoing.
Exchange Stage - We bring the primary exchange online, and process claims. Recovery starts through exchange trading fee discounts and eventually gross trading revenue. The exchange platform is expected to launch within a few months.
Marketplace Stage - Once we have enough individuals with tokens, we bring in the first businesses from the wider community. After we have several initial businesses, the marketplace grows organically as more businesses sign up over time. This stage is expected to begin approximately a year after the exchange launches.
Full recovery (all losses) is likely to take multiple years, anywhere from 3 to 25 years. My best estimate would be 10 years, although there are a lot of factors to consider.


Verification of Claims

Accurately capturing losses is key. Businesses are interested in helping honest victims of a crime who had their money stolen - not in supporting fraud. We've been working hard to make our process as easy as possible for affected users, while making it as hard as possible to submit false claims (claiming incorrect amounts, claiming the losses of others, or submitting fake claims).
  • Our ideal verification is based on the Quadriga user balance website and the claim details described below.
  • If we don't have all of the information, or there are problems, claims may be limited or rejected. This is at our full discretion, along with that of our primary exchange partner.
  • The user balance website is available to confirm balances for a limited time. It could go offline as early as August 31st. Once it goes offline, pre-claims will no longer be possible. As no list of claimants is being published through the bankruptcy, and paperwork can easily be manipulated, larger balances will then have to be validated through the courts.
  • Anyone with a balance on Quadriga can create a pre-claim by providing:
    • Your client ID and first name, for the purposes of saving the balance you held.
    • An email address for a future launch announcement (which can be a forwarder).


How To Sign Up

If you wish to participate, please sign up at https://www.quadrigainitiative.com/.
You can do a pre-claim to save your balance, or an email-only sign-up just to show interest and get the launch email.



How You Can Help

We are stronger together!
  • Get yourself to a solid understanding of what we are doing by asking questions and giving feedback if anything doesn't make sense. This is the biggest thing!
  • Send in your pre-claim or do an email-only signup. (Every sign-up helps show interest.)
  • Upvote.
  • Share on social media.
  • Let us know your ideas/thoughts!
  • Join our Telegram group. Come meet our team!
  • Help us get the word out. Tell your friends.


Thanks so much!
submitted by azoundria2 to BitcoinCA
