Written by
0xIchigo
Published on
August 2, 2024

Zero-Knowledge Proofs: Their Applications on Solana

A big thanks to Matt, Porter, Nick, Swen, and bl0ckpain for reviewing the articles in this series.

Introduction

This is the second article in an introductory series on zero-knowledge proofs. I highly recommend reading Zero-Knowledge Proofs: An Introduction to the Fundamentals before this article, as it provides the necessary context on the underlying theory, mathematics, and cryptography that frames this article’s analysis. This article assumes that knowledge, so, if you are unfamiliar with zero-knowledge proofs, start there and return here afterwards. The goal of this article is to equip the reader with the requisite knowledge to contribute to the discussion of zero-knowledge proofs on Solana, and perhaps even go down the path of developing new, innovative zero-knowledge primitives.

And so, with this newfound knowledge, we can finally ask: what are zero-knowledge proofs, and how are they used on Solana?

So, What Are Zero-Knowledge Proofs?

Now that we finally have the necessary theory, math, and cryptography under our belt, it’s only appropriate to ask: what exactly is a zero-knowledge proof?

A zero-knowledge proof is a cryptographic process wherein one party is able to prove to another party that a certain statement is true without revealing any additional information apart from the fact that the statement is indeed true. These proofs must be statistically sound, complete, and secure from information leakage.

There are two types of statements one would want to prove in zero-knowledge. At a high level,

  • Statements about facts (e.g., this specific graph has a three-coloring)
  • Statements about knowledge (e.g., I know the factorization of N)

The first statement, ultimately, concerns some intrinsic property of the universe — something that is fundamentally true such as 1 + 1 = 2. The second statement is referred to as a proof of knowledge. That is, it goes beyond the mere proof that something is true and relies on what the Prover knows.

As mentioned previously, proofs of these statements can be interactive or non-interactive. In an interactive zero-knowledge proof, the Prover and the Verifier engage in a series of rounds until the Verifier is convinced beyond a reasonable doubt.

In contrast, in a non-interactive zero-knowledge proof, the proof is delivered without any direct back-and-forth between the Prover and the Verifier. Here, the Prover generates a proof that encapsulates all the necessary information, and the Verifier can verify it independently without any further interaction. Zcash uses non-interactive zero-knowledge proofs to allow users to make anonymous transactions, and Filecoin uses them to prove that users have stored data without revealing the data itself.

zk-SNARKs and Circuits

Adapted from Zero-knowledge demystified - it’s not magic, it’s technology by Mauricio Magaldi and Olga Hryniuk

A zk-SNARK is a Succinct Non-interactive ARgument of Knowledge. These zero-knowledge proofs are particularly efficient and compact, or succinct. A proof is considered succinct if both the size of the proof and the time required to verify it grow more slowly than the computation being verified. Thus, if we want a succinct proof, the Verifier cannot do work for every step of the computation (e.g., one check per round of hashing), or the verification time will be proportional to the computation itself. Succinct proofs are possible thanks to polynomials and the Fiat-Shamir heuristic.
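The Fiat-Shamir heuristic replaces the Verifier’s random challenges with hashes of the transcript so far, which is what lets an interactive protocol become non-interactive. Below is a minimal sketch of the idea, using SHA-256 as a stand-in random oracle; real proof systems use dedicated transcript abstractions.

```rust
use sha2::{Digest, Sha256};

/// Fiat-Shamir: derive the Verifier's challenge deterministically from the public transcript,
/// so the Prover can produce the whole proof without ever talking to a Verifier.
fn fiat_shamir_challenge(public_input: &[u8], prover_commitment: &[u8]) -> [u8; 32] {
    let mut transcript = Sha256::new();
    transcript.update(public_input);
    transcript.update(prover_commitment);
    transcript.finalize().into()
}

fn main() {
    let challenge = fiat_shamir_challenge(b"statement", b"commitment");
    println!("challenge = {:02x?}", challenge);
}
```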

Regardless of the complexity of the statement being proven, zk-SNARKs keep proof sizes small and relatively quick to verify. This is what makes zk-SNARKs so attractive for rollups; here, we have a method to prove that all the transactions for a given L2 are valid, and the proof is small enough to be verified on the L1. Protocols like Mina go one step further and use recursive proofs, which allow them to verify the entire history of the chain with a constant-size proof. Pickles is Mina’s new proof system and associated toolkit, and is the first deployed zk-SNARK capable of recursive composition without a trusted setup. Note that recursive composition happens at the level of circuits: a circuit can verify a proof produced by another circuit (or by itself).

Circuits describe the computation that you want to prove. They are a sequence of mathematical operations that take some inputs and produce outputs. Circuits are used in zero-knowledge proofs to prove that a computation was performed correctly without revealing its inputs. The general workflow is as follows:

  • Writing and Compiling the Circuit — We need to write and compile the circuit. This could be as simple as creating an arithmetic circuit over a finite field defined by the prime p = 7 and the computation x * y = z (a toy version of this constraint is sketched after this list). The general idea is to represent the computation to be proved as a set of constraints among variables, reduce these constraints to polynomial equations, and write code that maps all the values one way into a new algebraic structure (i.e., a homomorphism). The compilation process outputs several artifacts, including the set of constraints defined by the circuit and a script or binary to be used in subsequent steps
  • Trusted Setup Ceremony — Depending on the type of zk-SNARK used, it may be necessary to run a ceremony to generate the proving and verifying keys
  • Executing the Circuit — The circuit must be executed using the script or binary generated during compilation, as if it were some type of program. The user enters the public and private inputs, and values for all intermediate and output variables are then calculated. The witness, also known as a trace, is the record of all of the computation’s steps
  • Generating the Proof — Given the proving key from the second step and the witness from the third step, the prover can generate a zero-knowledge proof that all the constraints defined in the circuit hold while only revealing the output value. This proof is sent to the verifier
  • Verifying the Proof — The verifier verifies the proof is correct for its public output using the submitted proof and the verifying key
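To make the first step concrete, here is a toy sketch of the x * y = z circuit over the finite field defined by p = 7, reduced to a single constraint check against a witness. Real circuits compile to thousands of such constraints (e.g., in R1CS form), but the core idea is the same; this is an illustration, not a production tool.

```rust
// Toy "circuit" over F_7: a single multiplication constraint x * y = z.
const P: u64 = 7;

/// The witness records every variable assignment: private inputs and the output.
struct Witness {
    x: u64, // private input
    y: u64, // private input
    z: u64, // public output
}

/// Check that the witness satisfies the circuit's only constraint, x * y = z (mod 7).
fn satisfies_circuit(w: &Witness) -> bool {
    (w.x * w.y) % P == w.z % P
}

fn main() {
    let witness = Witness { x: 3, y: 4, z: 5 }; // 3 * 4 = 12, and 12 mod 7 = 5
    assert!(satisfies_circuit(&witness));
    println!("witness satisfies the circuit");
}
```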

Certain types of zk-SNARKs, such as Groth16, require a trusted setup for each circuit. This can hinder you since you’d need to run a new ceremony for every new program. Other zk-SNARKs, such as PlonK, only require one universal trusted setup, simplifying this entire process. Other types of zero-knowledge proofs, such as zk-STARKs, eliminate the need for a trusted setup entirely.

zk-STARKs

A zk-STARK is a Scalable Transparent ARgument of Knowledge. zk-STARKs were invented by StarkWare and first proposed in this 2018 paper as an alternative to zk-SNARKs. Essentially, they allow blockchains to move computations to a single off-chain STARK Prover and verify the integrity of those computations using an on-chain STARK Verifier.

zk-STARKs are considered zero-knowledge because the inputs used by the off-chain prover are not exposed to the blockchain, maintaining the user’s privacy. zk-STARKs are scalable because moving computations off-chain reduces L1 verification costs significantly. Their proofs also scale linearly, whereas zk-SNARK proofs only scale quasilinearly. Moreover, zk-STARKs do not rely on elaborate trusted setup ceremonies, which are susceptible to toxic waste — leftover secret parameters from the ceremony that, if not destroyed, could be used to forge proofs that verifiers would accept. Instead, zk-STARKs use publicly verifiable randomness to set up interactions between provers and verifiers. These proofs can only be generated by an off-chain prover that actually executed the computation, along with the auxiliary inputs it requires.

zk-STARKs address the limitations of zk-SNARKs by being scalable, transparent arguments of knowledge. They also come with much simpler cryptographic assumptions, completely avoiding the need for elements such as elliptic curves. Instead, they rely purely on hashes and information theory, making them quantum-resistant. However, the size of the proof is in the realm of a few hundred kilobytes, which can limit their feasibility in environments with limited bandwidth or storage, such as blockchains.

Note that some more complex proving setups and zkVMs combine the two: they prove zk-STARKs recursively and then wrap the result in a zk-SNARK just for the final verification step.

ZK Compression

The State Growth Problem

One of Solana’s most pressing issues is the state growth problem. To contextualize this problem, Solana’s state is stored on the disks of full nodes in the Accounts DB. This is a key-value store, where each entry in the database is known as an account. Accounts have 32-byte addresses, and the amount of data an account can store varies between 0 and 10 MB. Currently, storing 10 MB of data costs roughly 70 SOL, regardless of whether that’s spread across one 10 MB account or one thousand 10 KB accounts. Each day, roughly one million new accounts are added to the chain, which brings the total state to over 500 million accounts, according to Toly’s post on the state growth problem. This poses several challenges as Solana continues to grow, namely unbounded snapshot size, PCI bandwidth limitations, account indexing, and expensive memory and disk management.

The current full snapshot size is roughly 70 GB, which is manageable with current hardware. However, continuous growth will inevitably lead to inefficiencies in state management and potential bottlenecks. As the snapshot size increases, the time required to cold boot a new system after a hardware failure becomes significantly longer, which could be detrimental in the case of a network restart. 

Peripheral Component Interconnect (PCI) bandwidth refers to the data transfer rate between the CPU and peripheral devices, such as graphics cards, network cards, and storage devices. PCI Express (PCIe) is a high-speed interface standard designed to replace the older PCI standard and provide higher data transfer rates. The latest PCIe generations can reach speeds of roughly 1 Tb/s, or 128 GB/s. While this sounds like a lot, it isn’t within the context of Solana. If each transaction reads or writes 128 MB, then 128 GB/s of PCI bandwidth would limit Solana to 1,000 transactions per second (TPS). However, most transactions access recent memory that has already been loaded and cached into a validator’s RAM. Nevertheless, efficient state memory management is crucial to ensure high throughput can be maintained while Solana scales, or else this bandwidth can become a limiting factor quite quickly.

Each validator needs to maintain an index of all existing accounts. This is because creating a new account requires proof that the account doesn’t already exist. With a total state of over 500 million accounts, even a minimal index (i.e., a 32-byte key and a 32-byte data hash per entry) would require around 32 GB of RAM. This storage is expensive and needs to be managed carefully to prevent performance degradation. As Solana’s state continues to grow, the distinction between using fast, expensive memory (i.e., RAM) for certain operations and slower, cheaper memory (i.e., disk) becomes crucial.

Transactions and State Growth

Every Solana transaction needs to specify all the accounts it reads from and writes to. Transactions are currently capped at 1232 bytes and must include the following (a rough byte budget is sketched after the list):

  • Header (3 bytes)
  • Signatures (64 bytes each)
  • Account addresses (32 bytes each)
  • Instruction data (arbitrary size)
  • Recent blockhash (32 bytes)
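Using the sizes above, a quick back-of-the-envelope calculation shows how little room is left for account addresses and instruction data. The sketch below assumes a single signer and ignores the compact-array length prefixes in the real wire format, so it slightly overestimates the available space.

```rust
// Rough byte budget for a single-signer Solana transaction with no instruction data.
// Ignores compact-array length prefixes, so the real limit is slightly lower.
const MAX_TX_SIZE: usize = 1232;
const HEADER: usize = 3;
const SIGNATURE: usize = 64;
const RECENT_BLOCKHASH: usize = 32;
const ADDRESS: usize = 32;

fn main() {
    let fixed = HEADER + SIGNATURE + RECENT_BLOCKHASH;
    let max_addresses = (MAX_TX_SIZE - fixed) / ADDRESS;
    println!("~{} account addresses fit before any instruction data", max_addresses);
}
```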

The following occurs when a transaction is executed:

  • Sanity Checks — Only recent transactions are valid, and deduplication, structural verification, fees, and signature checks are performed on the transaction
  • Program Loading — The program bytecode is loaded based on the program address, and the Solana Virtual Machine (SVM) is instantiated
  • Account Loading — All accounts referenced by the transaction are checked and loaded from storage into memory and passed to the SVM
  • Execution — The program bytecode is executed
  • Syncing — Any accounts that are modified are synced back into storage

This lifecycle poses several challenges as state continues to grow. Namely, on-chain state is expensive, and having more accounts stored on disk leads to larger snapshots and indexes. Additionally, not all accounts are accessed frequently, making it inefficient to incur a continuing resource cost on them.

Simplifying State Management with ZK Compression

Instead of storing all accounts on disk and reading them when needed, a transaction can pass the account data as part of the transaction payload. We can ensure users submitting transactions provide the correct state by using Merkle trees. A Merkle root is a small commitment to a large set of data, and Merkle proofs show that a particular piece of data is included in that commitment. This way, proofs can be verified against the commitment to validate that the correct state has been passed in and that the user is not being dishonest about the state provided.

While secure, these proofs can be quite large. For instance, if a tree contains 100k accounts, the proof size would be 544 bytes. Providing proofs for multiple accounts could quickly surpass the transaction size limit of 1232 bytes. Luckily, we can circumvent this by using more efficient proof systems. Using constant proof size commitments such as KZG or Pedersen commitments would reduce the proof size, making it more feasible to include proofs within the transaction size limit.
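To make this concrete, here is a minimal sketch of Merkle proof verification: recompute the root from a leaf and its sibling path and compare it to the stored commitment. SHA-256 is used purely for illustration; as noted later, ZK Compression’s state trees actually use Poseidon hashes, and the on-chain logic is more involved.

```rust
use sha2::{Digest, Sha256};

/// Recompute the root from a leaf and its sibling path, then compare it to the commitment.
/// Each proof element is (sibling_hash, sibling_is_left).
fn verify_merkle_proof(leaf: [u8; 32], proof: &[([u8; 32], bool)], root: [u8; 32]) -> bool {
    let mut node = leaf;
    for (sibling, sibling_is_left) in proof {
        let mut hasher = Sha256::new();
        if *sibling_is_left {
            hasher.update(sibling);
            hasher.update(node);
        } else {
            hasher.update(node);
            hasher.update(sibling);
        }
        node = hasher.finalize().into();
    }
    node == root
}
```

Each level of the tree contributes one 32-byte sibling hash to the proof, which is why a tree holding ~100k accounts (depth 17) yields the 544-byte proofs mentioned above.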

ZK Compression is simply a mechanism to address the issue of Merkle proof size — it is a way to prove that some computation has been done correctly, without the associated costs of on-chain storage, by leveraging Solana’s ledger.

What is ZK Compression?

ZK Compression is a new primitive that allows developers to compress on-chain state, reducing state costs by orders of magnitude while preserving security, performance, and composability. For example, creating 100 token accounts currently costs ~0.2 SOL, whereas, with ZK Compression, the cost is reduced 5000x to ~0.00004 SOL.

ZK Compression leverages zero-knowledge proofs to validate state transitions without exposing the underlying data. It does this by grouping multiple accounts into a single, verifiable Merkle root stored on-chain while the underlying data is stored on the ledger. Validity proofs are succinct zero-knowledge proofs used to prove the existence of n accounts as leaves within m state trees, all while maintaining a constant 128-byte proof size. These proofs are generated off-chain and verified on-chain, reducing the overall computational burden on Solana. ZK Compression uses Groth16, a renowned pairing-based zk-SNARK, for its prover system.

These accounts, however, are not regular Solana accounts. Instead, they are compressed accounts.

Compressed Account Model

ZK compressed state is stored in compressed accounts. These accounts are similar to regular Solana accounts but with several key differences that enhance efficiency and scalability:

  • Hash Identification — Each compressed account can be identified by its hash
  • Hash Changes on Write — Any write operation to a compressed account will change its hash
  • Optional Address — An address can optionally be set as a permanent unique ID of the compressed account. This is useful for certain use cases, such as NFTs. The reason this field is optional is to avoid computational overhead since compressed accounts can be referenced by their hash
  • Sparse State Trees — All compressed accounts are stored in Merkle trees with only the tree’s state root (i.e., the Merkle root) stored in the on-chain account space. More specifically, a state tree is a Poseidon hash-based concurrent Merkle tree

From the ZK Compression documentation on the Compressed Account Model

Compressed Program-Derived Addresses (PDAs) can be identified by their unique, persistent address. They follow a similar layout to regular PDA accounts with the Data, Lamports, Owner, and Address fields. However, unlike regular PDAs, the Data field contains an AccountData structure with Discriminator, Data, and DataHash fields.
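A rough sketch of this layout in Rust, based only on the fields named above. The type choices and field names here are illustrative assumptions; the authoritative definitions live in the ZK Compression / Light Protocol crates and documentation.

```rust
/// Illustrative layout of a compressed account, mirroring the fields described above.
/// Types are assumptions for this sketch; consult the ZK Compression docs for the real structs.
pub struct CompressedAccount {
    pub owner: [u8; 32],           // program that owns the account
    pub lamports: u64,
    pub address: Option<[u8; 32]>, // optional permanent unique ID (useful for NFTs)
    pub data: Option<CompressedAccountData>,
}

pub struct CompressedAccountData {
    pub discriminator: [u8; 8],
    pub data: Vec<u8>,
    pub data_hash: [u8; 32],       // hash of `data`, folded into the account's overall hash
}
```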

Nodes

Different types of nodes play crucial roles in supporting ZK Compression. Anyone can run a Photon RPC node, Prover node, or a Light Forester node for connecting to Devnet and Mainnet-Beta. For local development, the ZK Compression CLI test-validator command starts a single-node Solana cluster with all the relevant nodes (i.e., Photon RPC and Prover), as well as system programs, accounts, and runtime features.

Photon RPC nodes index the compression programs. This enables clients to read and build transactions that interact with compressed state. The canonical compression indexer is named Photon and is provided by Helius. This type of node can be run locally with minimal setup and only needs to be pointed at an existing RPC.

Prover nodes are used to generate validity proofs for state inclusion. The getValidityProof endpoint of the ZK Compression RPC API specification can be used to fetch proofs. Prover nodes can be operated standalone or bundled with another RPC node. Note that the canonical Photon RPC implementation includes a Prover node.
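A minimal sketch of fetching a validity proof over JSON-RPC is shown below. Only the method name comes from the specification referenced above; the endpoint URL, the parameter shape (a list of compressed account hashes), and the response handling are assumptions for illustration, so check the ZK Compression RPC spec before relying on them.

```rust
// Hypothetical sketch of calling getValidityProof on a Photon RPC node.
// The endpoint URL and the "hashes" parameter shape are assumptions, not the official schema.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getValidityProof",
        "params": { "hashes": ["<compressed-account-hash>"] } // placeholder hash
    });

    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post("https://<your-rpc-endpoint>") // placeholder RPC URL
        .json(&request)
        .send()?
        .json()?;

    println!("{response}");
    Ok(())
}
```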

Light Forester nodes manage the creation, rollover, and updating of shared and program-owned state trees. They are meant for developers who want their own program-owned state trees serviced by a network of Light Forester nodes.

Trust Assumptions

Anyone can run one of the aforementioned nodes, as well as store the raw data necessary to generate proofs and submit transactions. This introduces a trust assumption that impacts the liveness of the compressed state. Namely, if the data is lost or delayed, transactions cannot be submitted unless the user stores the data themselves. Since only one honest node is needed to provide the data, and the proofs are self-verifiable, the issue is one of liveness and potential censorship rather than safety.

Moreover, the fact that the program that verifies compressed accounts is currently upgradeable introduces another trust assumption. This is so the program can be modified to fix any issues or be adapted to meet new requirements. However, it can be made immutable or frozen in the future once it reaches a stable and secure state.

Another liveness trust assumption is the use of Forester nodes. These nodes maintain state root advancement and manage nullifier queues by emptying them and advancing state roots asynchronously. Here, account hashes are replaced with zeros to nullify them. This separation of advancement and nullification ensures instant finality of compressed state transitions while keeping transactions within Solana’s size constraints. Since nullifier queues have a constant size, Forester nodes are essential for protocol liveness. A full queue would cause a liveness failure for the associated state tree. Thankfully, Forester nodes prevent this by emptying the queues. However, people still need to run these nodes to uphold the protocol’s integrity and liveness. Without these nodes, ZK Compression would only be able to support roughly two thousand accounts/addresses.

Limitations 

Even when it’s not necessary to hide anything, zero-knowledge proofs turn problems that require multiple computational steps into ones that require verifying a single proof to know the computations were performed correctly. These computations do not need to be along the lines of whether a specific leaf belongs to a given tree — they can be any arbitrary computation. However, this comes at a cost.

Before using ZK compression, consider the following:

  • Larger Transaction Size — ZK Compression requires 128 bytes for the validity proof, and any data to be read or written on-chain must be sent as part of the transaction
  • Higher Compute Unit Usage — ZK Compression increases compute unit (CU) usage significantly, requiring ~100k CUs for validity proof verification, ~100k CUs for system use, and ~6k CUs per compressed account read or write (see the rough estimate after this list)
  • Per-Transaction State Cost — Each write operation incurs a small network cost, since it must nullify the previous compressed account state and append the new compressed state to the state tree. Thus, it’s entirely possible for a single compressed account’s lifetime cost to surpass its uncompressed equivalent if it requires numerous state updates
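As a rough illustration of the compute budget, using the approximate figures above (actual costs vary with the program and account sizes):

```rust
// Rough CU estimate for a transaction touching four compressed accounts,
// using the approximate figures listed above.
const PROOF_VERIFICATION_CU: u64 = 100_000;
const SYSTEM_CU: u64 = 100_000;
const PER_COMPRESSED_ACCOUNT_CU: u64 = 6_000;

fn main() {
    let accounts: u64 = 4;
    let total = PROOF_VERIFICATION_CU + SYSTEM_CU + accounts * PER_COMPRESSED_ACCOUNT_CU;
    println!("~{total} CUs for a transaction touching {accounts} compressed accounts");
}
```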

It may be preferable to use a regular account if:

  • The account is updated frequently
  • The lifetime number of writes to the account will be large (i.e., >1,000 writes)
  • The account stores a large amount of data that must be accessed in on-chain transactions

Benefits

ZK Compression is a scalable, secure, efficient, and flexible primitive that directly tackles Solana’s state growth problem and supports a wide range of applications and use cases. Arguably, its most noticeable advantage is its state cost reduction. ZK Compression allows apps to scale to millions of users easily by securely storing state on the cheaper ledger space while keeping the on-chain storage at a minimum with state fingerprinting. Using the example of minting 10,000 token accounts, assuming a SOL price of $130 USD, it would cost roughly $2,600 to do so. ZK Compression lowers this to less than fifty cents.

ZK Compression also plays nicely with current Solana specifications. For example, the structure of compressed accounts is almost identical to that of regular Solana accounts. It also supports Solana-specific innovations such as parallelism. That is, any two transactions under the same state tree (i.e., commitment) that access different compressed accounts can be executed in parallel. Moreover, ZK Compression bolsters synchronous atomic composability. For example, a transaction that lists n compressed accounts and m regular accounts is a completely valid configuration. An instruction referencing a compressed account can call another instruction or program referencing a regular account. This is true even if the accounts are compressed under different state trees. If one instruction fails, the entire transaction is rolled back, and changes are visible from one instruction to the next. This differs from ZK rollups, where rollups cannot call each other synchronously or atomically unless they take locks. Naturally, this warrants a comparison between ZK Compression and rollups.

ZK Compression is Not a Rollup

Source: Vitalik’s reply on Warpcast

ZK Compression is not a rollup. While the two rely on the same technology, their implementations differ. There are two types of rollups:

  • Optimistic Rollups — All transactions are assumed to be valid for a given period of time, and fraud proofs are used to prove false transactions within this timeframe
  • Zero-Knowledge Rollups — Transactions are proved instantly to be valid or invalid using validity proofs

The entire state of a zero-knowledge rollup is represented as a single root on the base layer (i.e., Ethereum). This has led to multiple claims that ZK Compression is, in fact, a rollup. However, there are a number of crucial differences.

Consider a scenario involving 500 transactions on a ZK rollup. In this case, the entire rollup is treated as a single circuit. All 500 transactions are verified together, resulting in a single proof that confirms the state root has changed from A to B. After this proof is verified, the smart contract managing interactions between the L1 and L2 updates the state root accordingly. In contrast, with ZK Compression, each of the 500 transactions generates its own proof to verify the correctness of the account data. These transactions are executed by the SVM itself, and the accounts are treated as “regular” accounts once each proof is validated.

If we were to classify ZK Compression as a rollup, it would imply that any Merkle root stored on Solana could be considered a validity-based rollup. If we look at all the cNFT Merkle roots currently on Solana, then there are roughly 4-5k validity-based rollups, depending on whether we count Merkle trees with zero mints.

Thus, it becomes clear that ZK Compression is a unique solution tailored to Solana’s architecture. It’s a novel primitive, different from ZK rollups, that enhances scalability and efficiency without the complexity and separation of rollups.

The Future of ZK on Solana and Interoperability

Source: https://x.com/aeyakovenko/status/1739485437545296183

The Current State of ZK on Solana

One of my first writing tasks at Helius was to cover Solana’s v1.16 update. I was extremely excited to learn about the better runtime support for zero-knowledge proofs and covered it within the article. However, these improvements were delayed. I made the mistake of covering these improvements again, in greater detail, in the v1.17 update article, as they were delayed again. I didn’t even bother covering them in the v1.18 update article. Naturally, I was disappointed, with others expressing frustration.

Despite this sentiment, there is a budding, albeit small, community of ZK developers on Solana. Initially, Light Protocol focused on private program execution in the form of PSPs (Private Solana Programs) before refining its focus to ZK Compression. Dark Protocol is a futarchic privacy protocol built on Solana; it lacks a central team, with contributions made via proposals. Arcium, formerly known as Elusiv, is using Multiparty computation eXecution Environments (MXEs) to power its own parallelized, confidential computing network. Bonsol is a zero-knowledge "co-processor" that allows developers to run any risc0 image and verify it on Solana (i.e., verifiable, off-chain compute). Tutorials and lists of zero-knowledge proof-related links have also been circulated.

Most notably, the ZK Token Proof Program verifies several zero-knowledge proofs tailored to work with Pedersen commitments and Twisted ElGamal Encryption over curve25519. It powers Confidential Transfers, which use zero-knowledge proofs to encrypt the balances and transaction amounts of SPL tokens. The goal is confidentiality rather than anonymity. Homomorphic encryption allows computations to be performed on encrypted data without needing to decrypt it. To do this, Confidential Transfers use Twisted ElGamal Encryption for the hidden mathematical operations on ciphertext and Sigma Protocols to validate these transfers without revealing sensitive information. Only the account holder with the decryption key can view their encrypted balance. However, the Global Auditor System allows selective read access for compliance and auditing via separate decryption keys.
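To illustrate the homomorphic property at the heart of this design, here is a toy sketch (not real Twisted ElGamal over curve25519, and not secure): values are hidden "in the exponent" of a group element, and multiplying two such ciphertext components adds the hidden values, which is what lets encrypted balances be updated without decryption.

```rust
// Toy illustration of additive homomorphism in the exponent (NOT real ElGamal, NOT secure).
// Multiplying g^a and g^b yields g^(a+b), so hidden values can be added without revealing them.
fn mod_pow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let (p, g) = (101u128, 3u128); // tiny toy group; real systems use curve25519
    let (balance, deposit) = (17u128, 25u128); // hidden amounts

    let c_balance = mod_pow(g, balance, p);
    let c_deposit = mod_pow(g, deposit, p);

    // Multiplying the encrypted components corresponds to adding the hidden amounts.
    assert_eq!(c_balance * c_deposit % p, mod_pow(g, balance + deposit, p));
    println!("g^a * g^b == g^(a+b) (mod p): encrypted balances add homomorphically");
}
```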

The ZK Token Proof Program is currently a blocked feature due to the passing of SIMD-0153: ZK ElGamal Proof Program. This new SIMD aims to deprecate the existing ZK Token Proof program that is specifically designed for the SPL Token program and replace it with a more general zero-knowledge proof program independent of any specific application. The SIMD has been merged with support from both Anza and the Firedancer team.

However, the tides are starting to turn. Solana is shaping up to be a ZK juggernaut as these improvements are finally coming to Devnet and Mainnet-Beta. There are now three ZK syscalls live on Solana.

Poseidon Syscalls

Poseidon is a family of hash functions designed specifically for zero-knowledge proofs and is used in projects like Zcash, Mina, and Light Protocol. Poseidon is more computationally efficient for zero-knowledge proofs than traditional, generalized hash functions like SHA-256.

Poseidon hash functions are zero-knowledge friendly because they:

  • Perform arithmetic operations efficiently
  • Require fewer steps to generate proofs (i.e., have a lower circuit complexity), due to their arithmetic-friendly design, optimized S-box, and low number of rounds
  • Use algorithms that handle sequences of bits of any length, making them extremely versatile

It used to be too expensive to compute Poseidon hashes in one transaction. However, this changes with epoch 644 and the activation of the Poseidon syscall (i.e., a system call that takes a 2D-byte slice input and calculates the corresponding Poseidon hash as its output). This is exciting as ZK Compression relies on Poseidon hashing for its state trees.

The Poseidon syscall calculates hashes using the BN254 curve with the following parameters:

  • S-boxes — x5 substitution boxes
  • Inputs — 1 ≤ n ≤ 12
  • Width — 2 ≤ t ≤ 13
  • Rounds — 8 full rounds and partial rounds depending on t: [56, 57, 56, 60, 60, 63, 64, 63, 60, 66, 60, 65]

Its output will be the Poseidon hash result encoded as 32 bytes in the specified endianness.

Note that the specific variant used for the syscall is Poseidon with an x5 S-box and parameters tailored to the BN254 curve. The light-poseidon crate facilitates computing these hashes; the crate itself is audited and compatible with Circom.
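A minimal sketch of hashing two 32-byte values with the light-poseidon crate is shown below. The API names (Poseidon::new_circom, hash_bytes_be) reflect the crate’s published interface, but treat the exact signatures as assumptions and check them against the crate version you depend on.

```rust
use ark_bn254::Fr;
use light_poseidon::{Poseidon, PoseidonBytesHasher};

fn main() {
    // Poseidon over BN254 with the x5 S-box, parameterized for two inputs (Circom-compatible).
    let mut hasher = Poseidon::<Fr>::new_circom(2).expect("unsupported number of inputs");

    // Inputs are 32-byte big-endian values that must be smaller than the BN254 field modulus.
    let a = [1u8; 32];
    let b = [2u8; 32];
    let hash = hasher
        .hash_bytes_be(&[&a[..], &b[..]])
        .expect("input is not a valid field element");

    println!("poseidon(a, b) = {:02x?}", hash);
}
```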

alt_bn128 Syscalls

alt_bn128 refers to the implementation of the Barreto-Naehrig elliptic curve (also known as BN254 or BN-128), a pairing-friendly curve that enables efficient zk-SNARK proofs and computations. This curve is instrumental for various zero-knowledge proof systems, including Groth16, which is what ZK Compression relies on for validating state transitions. These syscalls significantly reduce the space required per proof, offering a crucial space and time optimization for efficient on-chain proofs.

The sol_alt_bn128_group_op syscalls compute operations on the alt_bn128 curve, including point addition in G1 (by G1 we simply mean a group of points on a given elliptic curve), scalar multiplication in G1, and pairing:

  • Inputs — Serialized points and scalars in big endian format
  • Operations — Point addition in G1, scalar multiplication in G1, pairing (1 point in G1 and 1 point in G2)
  • Outputs — Points in G1 or pairing results serialized as a 256-bit integer

The sol_alt_bn128_compression syscalls compress or decompress points in the G1 or G2 groups over the alt_bn128 curve and return the points in standard big-endian format.
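Below is a minimal sketch of G1 point addition using the alt_bn128_addition helper that the solana-program crate exposes for these syscalls. The exact module path and return type are assumptions to verify against the crate version you use; on-chain, the call is routed through the syscall.

```rust
use solana_program::alt_bn128::prelude::*;

fn main() {
    // Two copies of the BN254 G1 generator (x = 1, y = 2), each coordinate 32 bytes big-endian.
    let mut g = [0u8; 64];
    g[31] = 1; // x = 1
    g[63] = 2; // y = 2

    // Point addition takes the 128-byte concatenation of two serialized G1 points.
    let mut input = [0u8; 128];
    input[..64].copy_from_slice(&g);
    input[64..].copy_from_slice(&g);

    // The result, 2G, comes back as a 64-byte big-endian G1 point.
    let doubled = alt_bn128_addition(&input).expect("point addition failed");
    println!("2G = {:02x?}", doubled);
}
```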

These syscalls are currently live on testnet and will seemingly become available on Devnet once the bugs in the loaded programs’ recompile phase are resolved, as per SIMD-0075: Secp256r1 Precompile. This proposal aims to streamline error codes for alt_bn128 syscalls, as well as the Poseidon syscall, ensuring consistency and reducing the risk of consensus failures due to different error codes returned by validators.

The alt_bn128 syscalls can be used for regular vector commitments of constant proof size, such as KZG commitments. These commitments work with any pairing-friendly curve and do not require a ZK-prover circuit.

Interoperability

Solana is a ZK chain. It is a highly performant Layer 1 blockchain with low fees and runtime support for elliptic curve operations. The implementation and support for zero-knowledge proof-related syscalls cultivate innovation, allowing novel primitives and applications, such as ZK Compression, to be built on top of Solana.

The introduction of alt_bn128 syscalls narrows the composability gap between Solana and Solidity-based contracts that rely on precompiled contracts for the elliptic curve operations specified in EIP-196, EIP-197, and EIP-198. These operations facilitate zk-SNARK proof verification within Ethereum’s gas limits. Thus, Solidity contracts relying on these elliptic curve operations may now find it easier to transition to, or even interoperate with, Solana.

SIMD-0075 is vital to interoperability solutions. Once fully implemented, this SIMD will enable projects such as the upcoming blobstream-solana (which streams DA from Celestia to Solana) to use off-chain proof generation and on-chain verification for storing Merkle commitments. Without these syscalls, verifying Groth16 proofs on Solana would not be possible. Moreover, these syscalls enhance trust-minimized bridging and interoperability, allowing other blockchains to seamlessly and securely interact with Solana.

Toly’s right — with all of these enhancements to Solana’s runtime, Solana is an Ethereum L2. Soon, there will be nothing stopping you from submitting all of Solana’s blocks into some data-validating bridge contract on Ethereum. Inversely, there will be nothing stopping you from submitting all of Ethereum’s blocks into some data-validating bridge program on Solana. Bidirectional interoperability, accelerated by zero-knowledge proofs instead of antiquated bridges, is a bright and budding future for Solana.

Conclusion

Zero-knowledge proofs are undoubtedly one of the most powerful primitives developed by cryptographers, if not the most powerful. By examining the theory, math, and cryptography that underlie this concept in the first article, working up from first principles, this becomes self-evident. The potential applications are endless, from true fog of war for on-chain games to proving that a set of transactions on an L2 resulted in a specific state transition.

This two-part series could easily have been another fifty-plus pages, covering the intricacies of homomorphic encryption, coding circuits in Circom, and analyzing different commitment schemes. However, the goal of these articles is to educate the reader on the fundamentals of zero-knowledge proofs so they may take this newfound knowledge and apply it to Solana.

Solana is shaping up to be a ZK juggernaut. From the release of ZK Compression to the various syscalls going live in the near future, its importance cannot be overstated. While primitives like ZK Compression abstract away the complexities of zero-knowledge proofs for the average developer, an understanding of the fundamentals is invaluable for furthering the conversation and its development on Solana.

If you’ve read this far, thank you, anon! Be sure to enter your email address below so you’ll never miss an update about what’s new on Solana. Ready to dive deeper? Explore the latest articles on the Helius blog and continue your Solana journey, today.

Additional Resources