jacobcd52

Nice work! I'm not sure I fully understand what the "gated-ness" is adding, i.e. what role the Heaviside step function is playing. What would happen if we did away with it? Namely, consider this setup:

Let $f_{\text{enc}}$ and $f_{\text{dec}}$ be the encoder and decoder functions, as in your paper, and let $\mathbf{x}$ be the model activation that is fed into the SAE.

The usual SAE reconstruction is $f_{\text{dec}}(f_{\text{enc}}(\mathbf{x}))$, which suffers from the shrinkage problem.

Now, introduce a new learned parameter $\mathbf{r} \in \mathbb{R}^{d_{\text{sae}}}$, and define an "expanded" reconstruction $\hat{\mathbf{x}}_{\text{exp}} = f_{\text{dec}}(\mathbf{r} \odot f_{\text{enc}}(\mathbf{x}))$, where $\odot$ denotes elementwise multiplication.

Finally, take the loss to be:

$$\mathcal{L} \;=\; \big\lVert \mathbf{x} - f_{\text{dec}}^{\text{frozen}}\big(f_{\text{enc}}(\mathbf{x})\big)\big\rVert_2^2 \;+\; \lambda\,\big\lVert f_{\text{enc}}(\mathbf{x})\big\rVert_1 \;+\; \big\lVert \mathbf{x} - \hat{\mathbf{x}}_{\text{exp}}\big\rVert_2^2,$$

where $f_{\text{dec}}^{\text{frozen}}$ (a stop-gradient copy of the decoder) ensures the decoder gets no gradients from the first term. As I understand it, this is exactly the loss appearing in your paper. The only difference in the setup is the lack of the Heaviside step function.
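Concretely, here is a minimal PyTorch sketch of the setup I have in mind (all names, e.g. `RescaledSAE`, `W_enc`, `r`, are my own rather than from the paper, and the initialization is arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RescaledSAE(nn.Module):
    """Sketch of the proposed setup: a plain ReLU SAE plus a learned
    elementwise rescale r, trained with a frozen-decoder first term."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(0.01 * torch.randn(d_model, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(0.01 * torch.randn(d_sae, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.r = nn.Parameter(torch.ones(d_sae))  # learned rescale, applied as r * f(x)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(x @ self.W_enc + self.b_enc)

    def decode(self, f: torch.Tensor, frozen: bool = False) -> torch.Tensor:
        # frozen=True detaches the decoder weights, so this path
        # contributes no gradient to the decoder parameters.
        W = self.W_dec.detach() if frozen else self.W_dec
        b = self.b_dec.detach() if frozen else self.b_dec
        return f @ W + b

    def loss(self, x: torch.Tensor, sparsity_coeff: float) -> torch.Tensor:
        f = self.encode(x)
        frozen_recon = self.decode(f, frozen=True)  # first term (frozen decoder)
        expanded_recon = self.decode(self.r * f)    # third term (expanded reconstruction)
        return ((x - frozen_recon).pow(2).sum(-1).mean()
                + sparsity_coeff * f.abs().sum(-1).mean()
                + (x - expanded_recon).pow(2).sum(-1).mean())
```

Here `decode(f, frozen=True)` plays the role of $f_{\text{dec}}^{\text{frozen}}$ above.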

Did you try this setup? Or does it fail for an obvious reason I missed?

> The typical noise on feature $i$ caused by 1 unit of activation from feature $j$, for any pair of features $(i, j)$, is (derived from the Johnson–Lindenstrauss lemma)
>
> $$\epsilon \sim \sqrt{\frac{\ln N}{d}} \quad \text{[1]}$$
>
> 1. ... This is a worst-case scenario. I have not calculated the typical case, but I expect it to be somewhat less, though still the same order of magnitude.

Perhaps I'm misunderstanding your claim here, but the "typical" (i.e. RMS) inner product between two independently random unit vectors in $\mathbb{R}^d$ is $1/\sqrt{d}$. So I think the $\sqrt{\ln N}$ factor shouldn't be there, and the rest of your estimates are incorrect.
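A quick numerical sanity check of that RMS value (a sketch; the dimension and sample count are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 512, 10_000

# Sample pairs of independent random unit vectors in R^d.
u = rng.standard_normal((n_pairs, d))
v = rng.standard_normal((n_pairs, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)

# RMS inner product over the pairs; this comes out close to 1/sqrt(d),
# with no ln(N) factor.
rms = np.sqrt(np.mean(np.sum(u * v, axis=1) ** 2))
print(f"RMS inner product: {rms:.4f}   1/sqrt(d): {d ** -0.5:.4f}")
```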

> This means that we can have at most $\sim d/\ln N$ simultaneously active features

This conclusion gets changed to $\sim d$.
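For concreteness, the arithmetic behind that change (assuming, as I read the post, that interference from different active features adds incoherently):

```latex
% With S simultaneously active features, each contributing RMS interference
% \epsilon to a fixed feature's readout, incoherent addition gives total
% noise of order \sqrt{S}\,\epsilon. Requiring this to stay below one unit
% of signal:
\[
  \sqrt{S}\,\epsilon \lesssim 1
  \quad\Longrightarrow\quad
  S \lesssim \frac{1}{\epsilon^{2}} =
  \begin{cases}
    d/\ln N, & \epsilon = \sqrt{\ln N / d} \quad \text{(original estimate)},\\[4pt]
    d,       & \epsilon = 1/\sqrt{d} \quad \text{(corrected)}.
  \end{cases}
\]
```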