Research publications thread

Link any research publications in this thread with a short description.


Author(s): Ekleen Kaur and Gokul Alex

Time tokenisation functions embedded in the topological transitions of computationally complex curves lie at the heart of advanced proofs of knowledge such as zkSNARKs and zkSTARKs. This is the crux of the research that Ekleen and I (Gokul) have undertaken over the last couple of years through numerous prototypes and protocols built around distributed ledgers and dynamic data marketplaces.

We are delighted to present a summary of our research in the following article, published on arXiv, the open-access preprint repository maintained by Cornell University. Please read and review this research and share your perspectives and pointers!

[2108.06389] Time Transitive Functions for Zero Knowledge Proofs
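The paper's specific time-transitive constructions are not reproduced here, but as background, a minimal sketch of the sequential-squaring primitive that underlies verifiable delay and time-lock constructions may help frame the topic. All parameters below are toy values chosen for illustration, not values from the paper:

```python
def sequential_square(x, T, N):
    """T sequential modular squarings: returns x^(2^T) mod N.

    Without the factorization of N, no known shortcut exists, so the
    computation takes T inherently sequential steps -- the "delay".
    """
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

# Toy parameters (insecure sizes, for illustration only).
p, q = 10007, 10009
N = p * q
x, T = 5, 1000

y_slow = sequential_square(x, T, N)

# Whoever knows the factorization (the trapdoor) can shortcut the delay
# by reducing the exponent 2^T modulo Euler's totient of N:
phi = (p - 1) * (q - 1)
y_fast = pow(x, pow(2, T, phi), N)
assert y_slow == y_fast
```

The asymmetry between the slow sequential path and the trapdoor shortcut is the property that delay-based proof systems build on.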

Title: High-Frequency Trading on Decentralized On-Chain Exchanges
Authors: Liyi Zhou, Kaihua Qin, Christof Ferreira Torres, Duc V Le and Arthur Gervais

Decentralized exchanges (DEXs) allow parties to participate in financial markets while retaining full custody of their funds. However, the transparency of blockchain-based DEXs, in combination with the latency for transactions to be processed, makes market manipulation feasible. For instance, adversaries could perform front-running: the practice of exploiting (typically non-public) information that may change the price of an asset for financial gain. In this work we formalize, analytically exposit and empirically evaluate an augmented variant of front-running: sandwich attacks, which involve front- and back-running victim transactions on a blockchain-based DEX. We quantify the probability of an adversarial trader being able to undertake the attack, based on the relative positioning of a transaction within a blockchain block. We find that a single adversarial trader can earn a daily revenue of over several thousand USD when performing sandwich attacks on one particular DEX, Uniswap, an exchange with over 5M USD daily trading volume by June 2020. In addition to the single-adversary game, we simulate the outcome of sandwich attacks under multiple competing adversaries, to account for the real-world trading environment.
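The front-run / victim / back-run sequence the abstract describes can be sketched on a constant-product AMM (x * y = k). The pool sizes, trade sizes, and the 0.3% fee below are illustrative assumptions, not figures from the paper:

```python
FEE = 0.997  # 0.3% swap fee, as in Uniswap v2

def swap_x_for_y(x_res, y_res, dx):
    """Trade dx of token X into the pool; return (new_x, new_y, dy_out)."""
    dx_eff = dx * FEE
    dy = y_res * dx_eff / (x_res + dx_eff)
    return x_res + dx, y_res - dy, dy

def swap_y_for_x(x_res, y_res, dy):
    """Trade dy of token Y into the pool; return (new_x, new_y, dx_out)."""
    dy_eff = dy * FEE
    dx = x_res * dy_eff / (y_res + dy_eff)
    return x_res - dx, y_res + dy, dx

# Pool reserves: 1000 X / 1000 Y.
x, y = 1000.0, 1000.0

# 1. Attacker front-runs the pending victim trade: buys Y with 50 X.
x, y, atk_y = swap_x_for_y(x, y, 50.0)
# 2. Victim's transaction executes at the now-worsened price.
x, y, victim_y = swap_x_for_y(x, y, 100.0)
# 3. Attacker back-runs: sells the Y acquired in step 1 at the inflated price.
x, y, atk_x_back = swap_y_for_x(x, y, atk_y)

profit = atk_x_back - 50.0
print(f"attacker profit in X: {profit:.4f}")
```

The attacker's profit comes entirely from the price impact their front-run imposes on the victim, which is why the paper's analysis turns on transaction ordering within a block.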


Title: A Note on Privacy in Constant Function Market Makers
Authors: Guillermo Angeris, Alex Evans, Tarun Chitra

Constant function market makers (CFMMs) such as Uniswap, Balancer, Curve, and mStable, among many others, make up some of the largest decentralized exchanges on Ethereum and other blockchains. Because all transactions are public in current implementations, a natural next question is if there exist similar decentralized exchanges which are privacy-preserving; i.e., if a transaction’s quantities are hidden from the public view, then an adversary cannot correctly reconstruct the traded quantities from other public information. In this note, we show that privacy is impossible with the usual implementations of CFMMs under most reasonable models of an adversary and provide some mitigating strategies.
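The note's core observation is easy to see concretely: in a constant-product pool, the publicly visible reserves before and after a swap determine the traded quantities exactly, so hiding the amounts inside the transaction accomplishes nothing. A minimal sketch, assuming a fee-free pool for simplicity:

```python
def swap(x_res, y_res, dx):
    """No-fee constant-product swap: returns post-trade reserves."""
    k = x_res * y_res
    x_new = x_res + dx
    return x_new, k / x_new

# A trade of dx = 37 against a 1000/1000 pool; imagine dx is "hidden"
# (e.g. committed or encrypted) in the transaction itself.
x0, y0 = 1000.0, 1000.0
x1, y1 = swap(x0, y0, 37.0)

# The adversary observes only the on-chain reserves (x0, y0) and (x1, y1),
# yet recovers both sides of the trade exactly:
recovered_dx = x1 - x0   # input amount
recovered_dy = y0 - y1   # output amount
print(recovered_dx, recovered_dy)
```

Because every CFMM trade moves the public reserve state along the curve, any observer can difference consecutive states to reconstruct trades, which is why the authors conclude privacy requires changing the mechanism itself rather than the transaction format.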


Title: Privacy-preserving Distributed Machine Learning via Local Randomization and ADMM Perturbation
Authors: Xin Wang, Hideaki Ishii, Linkang Du, Peng Cheng and Jiming Chen

With the proliferation of training data, distributed machine learning (DML) is becoming more competent for large-scale learning tasks. However, privacy concerns have to be given priority in DML, since training data may contain sensitive user information. In this paper, we propose a privacy-preserving ADMM-based DML framework with two novel features. First, we remove the assumption, commonly made in the literature, that users trust the server collecting their data. Second, the framework provides heterogeneous privacy for users depending on the sensitivity of their data and the degree of trust in the servers. The challenging issue is to keep the accumulation of privacy losses over ADMM iterations minimal. In the proposed framework, a local randomization approach, which is differentially private, provides users with a self-controlled privacy guarantee for their most sensitive information. Further, the ADMM algorithm is perturbed through a combined noise-adding method, which simultaneously preserves privacy for users' less sensitive information and strengthens the protection of the most sensitive information. We provide detailed analyses of the trained model's performance in terms of its generalization error. Finally, we conduct extensive experiments on real-world datasets to validate the theoretical results and evaluate the classification performance of the proposed framework.
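The two mechanisms the abstract combines can be illustrated on a toy consensus problem (averaging scalars stands in for model training). This is a sketch under my own simplifying assumptions, not the paper's algorithm: Laplace noise added to each user's datum before it leaves the device (local randomization), and Gaussian noise added to the primal variable each node broadcasts per ADMM iteration (perturbation):

```python
import math
import random

random.seed(7)

def laplace(scale):
    """Sample from Laplace(0, scale) via inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

# Mechanism 1: each user's sensitive datum is locally randomized
# before any server ever sees it.
true_data = [1.0, 2.0, 3.0, 4.0]
eps_local = 5.0                                  # assumed local privacy budget
data = [a + laplace(1.0 / eps_local) for a in true_data]

# Mechanism 2: consensus ADMM for the average, with Gaussian noise
# added to every shared iterate.
rho, sigma = 1.0, 0.05
n = len(data)
x = [0.0] * n        # local primal variables
u = [0.0] * n        # scaled dual variables
z = 0.0              # global consensus variable
for _ in range(50):
    # Local primal update for f_i(x) = (1/2) * (x - data_i)^2.
    x = [(a + rho * (z - ui)) / (1 + rho) for a, ui in zip(data, u)]
    # Each node perturbs its iterate before broadcasting it.
    shared = [xi + random.gauss(0.0, sigma) for xi in x]
    z = sum(s + ui for s, ui in zip(shared, u)) / n
    u = [ui + xi - z for ui, xi in zip(u, x)]

print(f"private consensus estimate: {z:.3f} (true mean 2.5)")
```

The per-iteration noise is what drives the "accumulation of privacy losses" the abstract highlights: each broadcast leaks a little, so the framework's contribution is bounding the total leakage while keeping the final estimate usable.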