Privacy Pools & New Attacks

A new protocol called "Privacy Pools" was recently introduced alongside a technical paper laying out how the system functions and how the authors envision extending it. The protocol is a generalization of the design of Tornado Cash, and the authors hope an ecosystem of related services will grow up around this tool so that honest users can still achieve on-chain privacy.

But this approach also introduces new attack vectors. Tornado Cash, as a single concentrated point for the entire ecosystem, made black-and-white designations easy. It was well known that actors generally considered "bad" by most of the geopolitical community (e.g. North Korean government hackers working to fund their weapons programs) used Tornado Cash. Because it was effectively one unified service, anyone else using it mixed their funds with those of the bad actors. Their funds then became, in the language of diplomacy, persona non grata around the world. Exchanges, even low/no-KYC exchanges, would not touch them.

In the privacy pools approach things are more complicated. So let us walk through how the protocol works and then lay out some interesting new attack vectors available to "the bad guys."

Mixers

Mixers work by giving users deposit receipts that can be encashed without revealing the full details of the receipt. The exact mechanism does not matter here – conceptually this is just like being asked for the last few digits of a phone or credit card number. Or even an old address. The withdrawal process has a way to ensure you have the right to enter into whatever transaction you are proposing.
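
To make the receipt idea concrete, here is a minimal sketch in Python, assuming a toy hash-commitment scheme with made-up names (ToyMixer, deposit, withdraw). Real mixers hide the deposit-withdrawal link behind a zero-knowledge membership proof; this sketch only shows the bookkeeping.

    import secrets
    from hashlib import sha256

    class ToyMixer:
        """Toy model of a mixer: deposits publish a commitment, withdrawals
        reveal a nullifier derived from the same secret. Illustrative only:
        recomputing the commitment here exposes the link, whereas a real
        mixer proves membership in the commitment set with a ZK proof."""

        def __init__(self):
            self.commitments = set()        # published on-chain at deposit time
            self.spent_nullifiers = set()   # prevents withdrawing a receipt twice

        def deposit(self):
            secret = secrets.token_bytes(32)
            self.commitments.add(sha256(b"commit" + secret).hexdigest())
            return secret                   # the off-chain "receipt" the user keeps

        def withdraw(self, secret):
            commitment = sha256(b"commit" + secret).hexdigest()
            nullifier = sha256(b"nullify" + secret).hexdigest()
            if commitment not in self.commitments:
                raise ValueError("no matching deposit")
            if nullifier in self.spent_nullifiers:
                raise ValueError("receipt already spent")
            self.spent_nullifiers.add(nullifier)
            return "withdrawal ok"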

In the case of Tornado Cash everyone in the world used the same small number of mixers. This is important because a mixer with one user provides no anonymity. Sure we cannot mathematically prove the lone depositor also made the withdrawal. But who else would have the secret information?

If there are a thousand users anonymity is better than if there are only a hundred (or ten). And so everyone, good actors and bad, all had an incentive to use the same service.

And one problem immediately jumps out: if you know bad actors will use the protocol, and that their presence will cause trouble for the other users, there is a natural limit on how many people will ever use this thing. It will only be people who like to "live dangerously."

Privacy Pools

This new tool seeks to fix that scaling problem, and the general bad vibes around mixers, by adding a small new twist to the concept. In a regular mixer you know each withdrawal corresponds to some deposit - but you do not know which one. That is sort of the point. But it has the side effect of making every withdrawal look like it might have come from a bad actor. After all, if it is 100% anonymous and there is at least one bad deposit, then any withdrawal could be that one. This is why funds out of mixers are not accepted by exchanges and the like.

With privacy pools when you withdraw you provide a list of deposits your funds might have come from. The protocol will reject the request if that list is missing your actual deposit. But if it succeeds you now have proof your funds came from one of the places you chose. If you chose only clean addresses then all is fine. There is no reason for anybody to block those funds. If your list was long enough you are functionally anonymous.
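
As a rough sketch of that membership rule, assume deposits are identified by their commitments and the withdrawer submits a list of them (the paper calls this an "association set"). The function name here is invented; in the real protocol the check happens inside a zero-knowledge proof, so observers learn the set but never which member was the true deposit.

    def withdraw_with_association_set(actual_deposit, association_set, all_deposits):
        """Toy version of a Privacy Pools style withdrawal check."""
        if actual_deposit not in all_deposits:
            raise ValueError("unknown deposit")
        if actual_deposit not in association_set:
            # the protocol rejects any list that omits the true deposit
            raise ValueError("association set is missing the real deposit")
        # Observers can now reason: these funds came from *some* member of
        # association_set. If every member is clean, the funds look clean,
        # and a long enough list keeps the withdrawer functionally anonymous.
        return {"association_set": list(association_set)}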

Of course for this to work there need to be large, publicly available lists of clean addresses folks can use. And a large number of users need to decide the project is worth supporting.

But there are a few problems. Contrary to some of the coverage out there this idea is not really that new. And it fails to resolve the real open issues.

Extortion Attacks

For a moment assume this system is in widespread use and we are a bad actor. What can we do? A whole new class of attack on the system is now open to us. Try the following:

  1. Set up a mining operation and send clean coins to a large number of addresses.
  2. Use those addresses in defi. Make them look clean and good.
  3. Get your addresses on lists of "clean and safe depositors" for privacy pools.

You can probably see where this is going. Once you control addresses that are widely used in those lists of clean depositors you can threaten people. All you need to do to wreak havoc is publish a single message "signed" by both a known-bad and a known-good address: the "good" address is now provably controlled by a bad actor, and every withdrawal proof that relied on it is retroactively suspect.
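
A small sketch of why one such message is enough, with invented names: once a supposedly clean address is shown to be attacker-controlled, every published withdrawal whose claimed source list leaned on that address becomes suspect after the fact.

    def newly_tainted_withdrawals(withdrawals, revealed_bad_address):
        """withdrawals: iterable of (withdrawal_id, association_set) pairs
        already published on-chain. Returns the ids whose claimed source
        list included the address just revealed as attacker-controlled."""
        return [
            wid for wid, association_set in withdrawals
            if revealed_bad_address in association_set
        ]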

You can threaten individual exchanges or users. You can threaten foundations and projects. You can threaten the privacy pools operators.

This particular brand of extortion attack, when implemented by hackers rather than mafiosi, is often known as "ransomware." You can effectively ransom the whole system.

Maybe mining is not the right starting point. It does not really matter: you can run some funds through a semi-respectable low-KYC exchange and start from there, or trade OTC with an acceptable actor, just once, and ask for the funds to be withdrawn from an exchange account. And on a low-fee network like Tron or BSC it will be easy to generate large sets of seemingly-clean wallets.

Yes the system could, in theory, allow users to go back and re-prove they are associated with a subset of their original choice should a chosen address later prove bad. The attack here is obvious: reveal one bad address at a time and force people to pay endless gas.
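
Back-of-the-envelope numbers make the drip-feed version concrete. The gas figures below are placeholders, not measurements, but the shape of the cost is what matters: victims pay every round, indefinitely, while the attacker pays roughly nothing.

    GAS_PER_REPROOF = 300_000    # assumed cost of one membership re-proof
    GAS_PRICE_GWEI = 30          # assumed gas price
    ETH_PER_GWEI = 1e-9

    def cost_to_victims_eth(reveal_rounds, affected_users):
        """Total gas burned by honest users if the attacker reveals one bad
        address per round and everyone affected must re-prove each time."""
        total_gas = reveal_rounds * affected_users * GAS_PER_REPROOF
        return total_gas * GAS_PRICE_GWEI * ETH_PER_GWEI

    # e.g. 50 reveal rounds hitting 10,000 users each:
    # cost_to_victims_eth(50, 10_000) -> 4,500 ETH of gas, paid by the victims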

Relayers

The second, and larger, problem is that this approach does not solve the "fee payment dilemma." Withdrawing from the protocol requires a gas fee. Where does that money come from? You cannot simply send a small transfer from the source wallet to the destination – that links them in a way that renders the mixing pointless.

The "solution" is relayers. And how do relayers work? They are centralized off-chain services that, for a fee, will pay the withdrawal gas and send funds to your designated end address. The docs are here. Tornado Cash operated a sort of "decentralized relayer network" but that was an abuse of language. The registry of relayers was maintained on-chain in an arguably decentralized fashion. But the relayers themselves were still centralized off-chain services.

Their own documentation even has a section entitled "Warning: Understand & Accept Potential Risks." Privacy pools does not stop this problem. The paper says:

All users with “good” assets have strong incentives and the ability to prove their membership in a “good”-only association set. Bad actors, on the other hand, will not be able to provide that proof.

That is true. And nobody is going to include a known-North-Korean address in their withdrawal request. But that does not matter. North Korea can still render the relayers criminals by using them. And they can do so in an especially insidious way: they can request a withdrawal naming only bad addresses.
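
Reusing the toy withdraw_with_association_set sketch from above: the membership rule only demands that the list contain the real deposit, so a list consisting entirely of known-bad deposits is perfectly valid as far as the protocol is concerned. (Labels below are hypothetical.)

    bad_deposits = {"dprk_deposit_1", "dprk_deposit_2"}        # hypothetical labels
    all_deposits = bad_deposits | {"clean_1", "clean_2"}

    # Succeeds: the real deposit is in the claimed list, even though every
    # member of that list is known-bad. Whichever relayer forwards the
    # resulting transaction has still moved sanctioned funds - now with a
    # proof of origin attached.
    withdraw_with_association_set("dprk_deposit_1", bad_deposits, all_deposits)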

Relayers Already Under Attack

Relayers were a particular focus of the criminal charges against the Tornado Cash founders recently. This design still requires relayers and makes it even easier for the bad guys to either attack or extort them. What's to stop hackers from depositing into the protocol and then extorting every relayer?

And if nobody wants to be a relayer this does not work at all. Maybe you want to add relayer whitelists? Ok. If the government requires KYC for those lists, the whole exercise is pointless. If not, we are back to the ransomware discussed above.

Yes these are important problems. But this is plainly not the solution.

Ideas That Might Actually Work

So privacy pools does not really solve anything. Can we get anywhere? The first observation to make is that eliminating fees is a non-starter: without fees the network is subject to well-known Sybil attacks.

How about something more like Zcash or Monero, but with this "pick your subset" feature added? The problem is that the subset needs to be specified on the withdrawal side only, or it is too easy to deanonymize transfers. This means the fee cannot simply be paid by the sender, and we are stuck with the same gas problem.

You could imagine a system that, borrowing a bit from Tron, grants every address a small allocation of free "mixer withdraw requests" each day. But to fix the problem this would need to include zero-balance addresses, and that, again, is subject to Sybil attack.

The only sort of half-working scheme that comes to mind is as follows:

  1. Every day at noon the system generates 10 random subsets of addresses. Senders can, in a manner resembling a sort of Zcash++, specify any collection of those subsets as the source of funds so long as it contains the sending address.
  2. Starting at midnight, for 24 hours, everyone in the system works from the same sets (a rough sketch follows this list).
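
Here is a rough sketch of that cycle. All parameters (10 subsets, 1,000 addresses per subset) and the use of Python's random module in place of a public randomness beacon are stand-ins, not part of any real design.

    import random

    def draw_daily_subsets(all_addresses, n_subsets=10, subset_size=1000, seed=None):
        """Noon draw. In practice the seed would have to come from a public,
        unbiasable randomness beacon, not a local RNG."""
        rng = random.Random(seed)
        pool = sorted(all_addresses)
        k = min(subset_size, len(pool))
        return [set(rng.sample(pool, k)) for _ in range(n_subsets)]

    def declared_source_for(sender, daily_subsets):
        """A sender may declare any union of today's subsets as the source of
        funds, provided the union contains the sending address. The simplest
        valid choice is the union of every subset the sender appears in."""
        usable = [s for s in daily_subsets if sender in s]
        if not usable:
            return None   # nothing valid today; wait for the next draw
        return set().union(*usable)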

Run that in a loop and you might, sort of, get there. There are at least two obvious problems here:

  1. Maybe you do not like any of the 10 sets that include your source address. Then you need to wait.
  2. Maybe there are not enough transfers per day from each subset to provide real anonymity.

Shortening the time fixes the first problem but makes the second problem worse. Increasing the number of random sets fixes the first but makes the second worse. Switching from a fixed time period to a fixed number of transactions partially resolves the second problem. But now, when considering if you can use any of the next-period random sets, you do not even know how long the wait is to try again. Plus, of course, all the transfers can only settle after the fixed number is reached or deanonymizing people is again easy.

Technical note: Ethereum itself has an open proposal for a partial solution to this problem: EIP-4337. This would allow the construction of gas-free withdrawals by introducing a different sort of mempool and intermediaries called "bundlers." Bundlers are relayers in all but name: they pick up users' operations off-chain, front the gas, and submit them on-chain for a fee. That is just going to get the entire network prosecuted.

This relayer bit is the real problem and has been understood since the early days of mixers. And a clever solution to that trade-off might move things forward. But privacy pools is not that.