# How Does Decentralized Key Generation Work?

All operations performed through the Arcana SDK must be signed with a user's private key in order to verify their authenticity. To create and manage such a public key infrastructure, Arcana implements its own secret sharing scheme.

In order to understand the Asynchronous Verifiable Secret Sharing model implemented in the SDK, one first needs to understand Shamir's Secret Sharing (SSS). The problem with highly sensitive information like a private key is that storing it in one location creates a single point of failure: if the node on which it is stored is corrupted or turns malicious, the private key is essentially compromised. How, then, does one store a key in a distributed manner so that no single node has access to the whole key, and a threshold number of nodes are required to reconstruct it? SSS, proposed by Shamir [1], was one of the first secret sharing schemes to solve this problem: a private key is shared among a group of participants and can be reconstructed whenever a threshold number of them present their shares.

At its core, Shamir's secret sharing is based on the idea that in order to reconstruct a polynomial of degree $n$, one requires $n + 1$ points that lie on it. For example, to reconstruct a line one requires at least two points on that line. SSS works by generating a polynomial of degree $t$ and letting the point at which it intercepts the y-axis be the secret. Each participant is then given the coordinates of a point on this polynomial (suppose there are $m$ participants, where $t < m$). To successfully reconstruct the polynomial, and thus the secret (by calculating the y-intercept of the reconstructed polynomial), one requires the shares of at least $t + 1$ participants.

Note that once $t + 1$ points on the polynomial are obtained, the polynomial can be reconstructed through Lagrange interpolation.
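As a concrete illustration, the splitting and reconstruction steps above can be sketched in Python. This is a toy example over a small prime field; a real implementation would use a cryptographically secure RNG and a field suited to the key material.

```python
# Toy Shamir secret sharing over a small prime field.
# Illustrative only: uses `random` instead of a cryptographic RNG.
import random

PRIME = 2**61 - 1  # field modulus (toy choice; a Mersenne prime)

def make_shares(secret, degree, m):
    """Split `secret` into m shares; any degree+1 of them can reconstruct."""
    # Random polynomial of the given degree with f(0) = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(degree)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation evaluated at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME       # numerator: prod (0 - xj)
                den = den * (xi - xj) % PRIME   # denominator: prod (xi - xj)
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, degree=2, m=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Note that any subset of size $t + 1$ works, and any subset of size $t$ or smaller reveals nothing about the y-intercept.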

One of the drawbacks of SSS is that at no point can any participant verify whether their share is correct, or whether the dealer is lying to them. Verifiable Secret Sharing was developed to address this shortcoming. Here, the participants go through phases of sharing and complaining to confirm that all participants' shares are in fact valid; if at any point during the protocol a participant finds their share to be invalid, they can raise a complaint.
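One well-known way to make shares verifiable is for the dealer to also publish commitments to the polynomial's coefficients, so each participant can check their own share against them (this is the idea behind Feldman-style VSS). The sketch below uses a tiny, insecure group purely for illustration; it is not necessarily the exact verification mechanism used in the SDK.

```python
# Toy Feldman-style share verification sketch.
# Assumption: illustrative group parameters at insecure sizes,
# not the scheme actually implemented in the SDK.
import random

P, Q, G = 23, 11, 4  # toy group: G has prime order Q modulo P

def deal(secret, degree, m):
    """Dealer: produce shares plus public commitments to the coefficients."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(degree)]
    commitments = [pow(G, c, P) for c in coeffs]  # broadcast publicly
    def f(x):
        return sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q
    shares = [(x, f(x)) for x in range(1, m + 1)]
    return shares, commitments

def verify(x, share, commitments):
    """Participant: check g^f(x) against the product of C_i^(x^i)."""
    lhs = pow(G, share, P)
    rhs = 1
    for i, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(x, i, Q), P) % P
    return lhs == rhs

shares, comms = deal(7, degree=2, m=5)
assert all(verify(x, y, comms) for x, y in shares)  # honest shares pass
x0, y0 = shares[0]
assert not verify(x0, (y0 + 1) % Q, comms)          # tampered share fails
```

A participant whose check fails would raise a complaint, as described above.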

Verifiable Secret Sharing assumes that the nodes participating in the scheme can communicate with each other synchronously. In practice, there are a few problems with this assumption. First and foremost, for systems distributed over a large area it is not possible to maintain a global clock that stays consistently in sync for all participants. Even if one were to assume the existence of phases in which members send messages synchronously, sending messages between phases would inevitably involve delays. This would cause participants to drift out of sync and thereby break the synchrony assumption for the whole system.

In [2], Cachin et al. put forward a general and practically efficient asynchronous verifiable secret sharing (AVSS) protocol, which is what this SDK implements. Fundamentally, a single dealer generates a bivariate polynomial and distributes slices of it to participants, who use them to generate and store key shares. Using this AVSS protocol, one can share a secret with a message complexity of $O(n^2)$ (number of messages sent) and a communication complexity of $O(kn^3)$ (number of bits transferred), where $k$ is a security parameter corresponding to the size of the secret. The optimal resiliency condition, i.e. the condition under which the system remains secure, is $n = 3t + 1$; in other words, as long as $t < n/3$ the system will be secure.
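The key algebraic property that makes bivariate sharing work is pairwise consistency: with a symmetric polynomial $f(x, y) = f(y, x)$, node $i$ holds the univariate slice $f(i, \cdot)$, and any two nodes $i$ and $j$ can cross-check their slices because both can compute $f(i, j)$. The toy sketch below shows only this core idea, not the full asynchronous message exchange of the Cachin et al. protocol.

```python
# Sketch of the pairwise-consistency property behind bivariate sharing.
# Assumption: toy symmetric polynomial over a small prime field.
import random

PRIME = 2**61 - 1  # toy field modulus

def symmetric_poly(t, secret):
    """Random symmetric bivariate polynomial of degree t with f(0, 0) = secret."""
    a = [[random.randrange(PRIME) for _ in range(t + 1)] for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[j][i] = a[i][j]  # enforce symmetry: f(x, y) == f(y, x)
    a[0][0] = secret
    def f(x, y):
        return sum(a[i][j] * pow(x, i, PRIME) * pow(y, j, PRIME)
                   for i in range(t + 1) for j in range(t + 1)) % PRIME
    return f

f = symmetric_poly(2, 42)
# Node i holds the slice f(i, .); nodes i and j can cross-check
# their slices because f(i, j) must equal f(j, i).
assert f(3, 5) == f(5, 3)
assert f(0, 0) == 42  # the shared secret sits at the origin
```

In the actual protocol, these cross-checks (together with complaints) let participants verify their shares even when messages arrive asynchronously.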

In the current implementation, a public-private key pair is assigned to a user of the dApp when they register for the first time, creating a mapping between the combination of [user identifier, OAuth provider] and the private key. To keep registration fast, the DKG (Distributed Key Generation) nodes generate and maintain a buffer of 200 key pairs. Whenever a new registration event takes place, one of these key pairs is assigned to the new user, avoiding the latency that would arise from generating keys on the fly.
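The buffering idea can be sketched as follows. All names here are hypothetical, and random hex strings stand in for the key pairs that the DKG nodes would actually produce; this is not the SDK's real buffering code.

```python
# Minimal sketch of a pre-generated key buffer (hypothetical names;
# random hex strings stand in for real DKG-generated key pairs).
import secrets
from collections import deque

BUFFER_SIZE = 200  # matches the buffer size described above

class KeyBuffer:
    def __init__(self):
        self.pool = deque()
        self.refill()

    def refill(self):
        # Stand-in for the DKG rounds that would really produce key pairs.
        while len(self.pool) < BUFFER_SIZE:
            self.pool.append(secrets.token_hex(32))

    def assign(self, user_id, provider):
        """Map (user identifier, OAuth provider) to a pre-generated key."""
        key = self.pool.popleft()
        self.refill()  # keep the buffer topped up for the next registration
        return (user_id, provider), key

buf = KeyBuffer()
mapping, key = buf.assign("alice@example.com", "google")
assert mapping == ("alice@example.com", "google")
assert len(buf.pool) == BUFFER_SIZE  # buffer is replenished after assignment
```

Because a registration only pops a ready-made key from the pool, the expensive distributed generation work happens outside the registration path.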

Developers also have the option of aggregating the logins of users who sign in to the same dApp through different accounts. As long as the identifier used by the OAuth provider (for example, the email ID) is the same, those accounts will be treated as the same user. This aggregation setting is only available at the time of app creation.

[1] Shamir, A. "How to Share a Secret." http://web.mit.edu/6.857/OldStuff/Fall03/ref/Shamir-HowToShareASecret.pdf

[2] Link to paper