In the conventional sequence-to-sequence (seq2seq) model for abstractive summarization, the internal transformations of the recurrent neural networks (RNNs) are entirely deterministic. The learned representations are therefore insufficient to capture all semantic details and contextual dependencies, which leads to redundant summaries and poor coherence. In this paper, we propose a variational neural decoder model for text summarization (VND). The model combines a variational RNN with a variational autoencoder to introduce a sequence of latent variables that capture a complex semantic representation at each decoding step. It comprises a standard RNN layer and a variational RNN layer, which produce a deterministic hidden state and a stochastic hidden state, respectively. Together, these two layers establish dependencies between the latent variables of adjacent time steps. This structure allows the model to better capture complex semantics and the strong dependencies between adjacent time steps when generating the summary, thereby improving summarization quality. Experimental results on the LCSTS and English Gigaword datasets show that our model achieves a significant improvement over the baseline models.
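The decoder structure described above can be sketched roughly as follows. This is a minimal, hypothetical simplification, not the paper's exact formulation: the dimensions, the tanh parameterization, the log-variance clipping, and all names (`decode_step`, `W_det`, etc.) are illustrative assumptions. The sketch shows the two key ingredients: a latent variable `z_t` whose prior is conditioned on the previous deterministic and stochastic states (establishing the dependence between adjacent time steps), and a deterministic RNN update that consumes `z_t` alongside the previous hidden state and input.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8   # hidden-state size (illustrative)
Z = 4   # latent-variable size (illustrative)

# Small random weights stand in for learned parameters.
W_det = 0.1 * rng.standard_normal((H, H + H + Z))  # deterministic RNN layer
W_mu  = 0.1 * rng.standard_normal((Z, H + Z))      # prior mean network
W_sig = 0.1 * rng.standard_normal((Z, H + Z))      # prior log-variance network

def decode_step(h_prev, z_prev, x_t):
    """One decoding step: sample the latent z_t conditioned on the previous
    deterministic state h_{t-1} and previous latent z_{t-1}, then update the
    deterministic state from (h_{t-1}, x_t, z_t)."""
    prior_in = np.concatenate([h_prev, z_prev])
    mu       = W_mu  @ prior_in
    log_var  = np.clip(W_sig @ prior_in, -5.0, 5.0)  # clip for numerical stability
    eps      = rng.standard_normal(Z)
    z_t      = mu + np.exp(0.5 * log_var) * eps      # reparameterization trick
    h_t      = np.tanh(W_det @ np.concatenate([h_prev, x_t, z_t]))
    return h_t, z_t

# Unroll a few decoding steps with stand-in token embeddings.
h, z = np.zeros(H), np.zeros(Z)
for _ in range(3):
    x = rng.standard_normal(H)  # stand-in for the previous output-token embedding
    h, z = decode_step(h, z, x)
```

At training time, a posterior network over `z_t` and a KL term against this prior would be added, as in a standard variational autoencoder; the sketch only illustrates the generative (decoding) path.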