Federated learning is a collaborative machine learning approach in which multiple clients train a global model without sharing raw data. Its decentralized data processing and privacy-preserving properties give it high application value in IoT, healthcare, and other fields. Despite these advantages, the classic federated learning algorithm, Federated Averaging (FedAvg), faces limitations that slow its optimization and compromise system security. This paper introduces FedCCSM, a federated learning framework designed to address class imbalance and malicious client behavior. First, to accelerate model optimization, a client selection mechanism based on specific criteria ensures that clients with high-quality data or strong computational resources participate in the aggregation process, speeding up optimization and improving overall efficiency. Second, a committee mechanism selects a committee of clients to screen model updates before aggregation, enhancing system security. The committee serves as a precaution against malicious clients that mount adversarial attacks by intentionally submitting inaccurate updates or otherwise compromising the integrity of the global model, thereby ensuring the security and reliability of the global model throughout the collaborative learning process. Third, by simulating class-imbalanced clients, the algorithm's practical effectiveness is strengthened. Experiments on the MNIST and CIFAR-10 datasets demonstrate that FedCCSM improves accuracy on imbalanced datasets by 3% over FedAvg and reduces the influence of malicious clients by 5%. These results highlight the potential of FedCCSM for enhancing the robustness and fairness of federated learning in security-sensitive applications.
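The abstract does not specify how the committee screens updates or how selected clients are aggregated. The following is a minimal sketch, not the paper's actual method: it assumes FedAvg-style weighted averaging over the selected clients and, as a stand-in for the committee vote, a simple median-distance outlier filter that rejects updates far from the coordinate-wise median. The function names `fedavg_aggregate` and `committee_screen` and the `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(updates, weights):
    """FedAvg-style weighted average of client model updates.

    `updates` is a list of equal-shape numpy arrays (flattened model
    parameters); `weights` are typically the clients' local sample counts.
    """
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, updates)) / total

def committee_screen(updates, threshold=2.0):
    """Hypothetical committee filter (an assumption, not the paper's rule):
    keep only updates whose distance to the coordinate-wise median update
    is at most `threshold` times the median of all such distances.
    Returns the indices of the accepted updates.
    """
    stacked = np.stack(updates)
    median_update = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median_update, axis=1)
    cutoff = threshold * np.median(dists)
    return [i for i, d in enumerate(dists) if d <= cutoff]

# Three honest clients near [1, 1] and one malicious outlier.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([100.0, -100.0])]
kept = committee_screen(updates)            # outlier at index 3 is rejected
aggregated = fedavg_aggregate([updates[i] for i in kept], [1.0] * len(kept))
```

Under this sketch the malicious update never reaches aggregation, which is the property the committee mechanism is meant to provide; the paper's own screening criterion may differ.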