Federated Learning (FL) has gained attention for its promise of privacy protection. In FL, clients compute gradients locally on their own data and upload only these updates, rather than the raw data, to update the global model. However, security issues persist: attackers can reconstruct original data from local gradients, compromising privacy, and a malicious cloud server may tamper with the uploaded parameters, producing an incorrect aggregate. We therefore focus on two issues in FL: (1) protecting the privacy of the parameters uploaded by clients and (2) verifying the correctness of the aggregated result returned by the cloud server. To address these issues, this article proposes VSAF, a verifiable and secure aggregation scheme for federated learning in edge computing. Using a linear homomorphic hash function, we design a lightweight verification algorithm for aggregated gradients. To protect gradient privacy, we combine a Bloom filter with Shamir's secret sharing to design a single-masking protocol. Detailed analyses and experiments demonstrate the security and efficiency of the proposed scheme.
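To illustrate the verification idea, the following is a minimal sketch of how a linear homomorphic hash can let clients check an aggregate without seeing each other's gradients. The modulus, generators, dimensions, and function names here are illustrative assumptions for the sketch, not the concrete construction used in VSAF; gradients are assumed to be encoded as small non-negative integers.

```python
import random

# Toy parameters (assumed for this sketch, not taken from VSAF).
P = 2**127 - 1   # a Mersenne prime used as the modulus
DIM = 4          # toy gradient dimension

rng = random.Random(42)
generators = [rng.randrange(2, P) for _ in range(DIM)]  # public bases h_i

def lh_hash(grad):
    """H(g) = prod_i h_i^{g_i} mod p, so H(g1 + g2) = H(g1) * H(g2) mod p."""
    acc = 1
    for h_i, g_i in zip(generators, grad):
        acc = acc * pow(h_i, g_i, P) % P
    return acc

# Each client hashes its integer-encoded gradient and publishes the digest.
client_grads = [[rng.randrange(0, 1000) for _ in range(DIM)] for _ in range(3)]
digests = [lh_hash(g) for g in client_grads]

# The server returns a claimed aggregate; clients verify it against the
# product of the published digests, thanks to the linear homomorphism.
aggregate = [sum(col) for col in zip(*client_grads)]
expected = 1
for d in digests:
    expected = expected * d % P

assert lh_hash(aggregate) == expected  # honest aggregation passes the check
```

A tampered aggregate (e.g., one coordinate changed by the server) would make the final equality fail with overwhelming probability, which is what makes the per-round verification lightweight: clients exchange only short digests rather than full gradients.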