Fig. 1. Partitioning communication and computation for a single neuron and its inputs. *A*, The presynaptic axonal input to the postsynaptic neuron is a multivariate binary vector, *X* = [*X*_{1}, *X*_{2}, …, *X*_{n}]. Each input, *X*_{i}, is subject to quantal failures, the result of which is denoted by φ(*X*_{i}), another binary variable that is then scaled by the quantal amplitude, *Q*_{i}. Thus, each input provides excitation φ(*X*_{i})*Q*_{i}. The dendrosomatic summation, ∑_{i}φ(*X*_{i})*Q*_{i}, is the endpoint of the computational process, and this sum is the input to the spike generator. Without specifying any particular subcellular locale, we absorb generic nonlinearities that precede the spike generator into the spike generator itself, *g*(∑_{i}φ(*X*_{i})*Q*_{i}). The spike generator output is a binary variable, *Z*, which is faithfully transmitted down the axon as *Z*′. This *Z*′ is just another *X*_{i} elsewhere in the network. In neocortex, experimental evidence indicates that axonal conduction is essentially information lossless; as a result, *I*(*Z*; *Z*′) ≈ *H*(*Z*). The information transmitted through synapses and dendrosomatic summation is measured by the mutual information *I*(*X*; ∑_{i}φ(*X*_{i})*Q*_{i}) = *H*(*X*) − *H*(*X* | ∑_{i}φ(*X*_{i})*Q*_{i}). The assumptions in the text, combined with one of Shannon's source-channel theorems, imply that *H*(*X*) − *H*(*X* | ∑_{i}φ(*X*_{i})*Q*_{i}) = *H*(*p**), where *H*(*p**) is the energy-efficient maximum value of *H*(*Z*). *B*, The model of failure-prone synaptic transmission. An input value of 0, i.e., no spike, always yields an output value of 0, i.e., no transmitter release. An input value of 1, an axonal spike, produces an output value of 1, transmitter release, with success probability s = 1 − f. A failure occurs when an input value of 1 produces an output value of 0; the probability of failure is denoted by f.
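The failure-prone synapse in *B* is a Z-channel: an input of 0 always yields 0, while an input of 1 yields 1 only with success probability s = 1 − f. The computation in *A* then sums the surviving quanta, ∑_{i}φ(*X*_{i})*Q*_{i}. A minimal simulation sketch of these two stages (function and variable names are illustrative, not from the original):

```python
import random

def phi(x, f, rng=random):
    """Failure-prone synapse (Z-channel): a spike (x = 1) triggers
    transmitter release with probability s = 1 - f; no spike (x = 0)
    never triggers release."""
    if x == 0:
        return 0
    return 1 if rng.random() >= f else 0

def dendrosomatic_sum(X, Q, f, rng=random):
    """Dendrosomatic summation: sum_i phi(X_i) * Q_i, the input
    delivered to the spike generator."""
    return sum(phi(x, f, rng) * q for x, q in zip(X, Q))

# Illustrative values: n = 4 presynaptic inputs, failure probability f = 0.25
random.seed(0)
X = [1, 0, 1, 1]           # presynaptic spike vector
Q = [0.5, 0.5, 1.0, 0.75]  # quantal amplitudes
print(dendrosomatic_sum(X, Q, f=0.25))
```

With f = 0 every spike survives and the sum reduces to the deterministic ∑_{i}*X*_{i}*Q*_{i}; with f = 1 every spike fails and the sum is 0, matching the two limiting cases of the channel in *B*.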