Classifying human gestures using surface electromyographic (sEMG) sensors is a challenging task. Wearable sensors have proven extremely useful in this context, but their performance is limited by several factors (signal noise, computing resources, battery consumption, etc.). In particular, computing resources impose a limitation in many application scenarios, where lightweight classification approaches are desirable. Recent research has shown that machine learning techniques are effective for human gesture classification once salient features have been extracted. This paper presents a novel approach to human gesture classification that combines two strategies: a) a technique based on autoencoders is used to perform feature extraction; b) two alternative machine learning algorithms (namely J48 and K*) are then used for the classification stage. Empirical results show that, on platforms with limited computing power, our approach outperforms alternative methodologies.
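To make the two-stage pipeline concrete, the following is a minimal sketch, not the paper's implementation: the data is synthetic stand-in sEMG feature windows, scikit-learn's MLPRegressor trained to reconstruct its input serves as a small autoencoder, and DecisionTreeClassifier stands in for Weka's J48 (C4.5); Weka's K* has no direct scikit-learn equivalent and is omitted here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for windowed sEMG feature vectors (hypothetical shapes):
# 500 windows, 64 raw features each, 5 gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.integers(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage a) autoencoder: an MLP trained to reconstruct its own input;
# the narrow hidden layer (8 units) yields the compressed feature code.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X_tr, X_tr)

def encode(model, X):
    # Forward pass through the first (encoding) layer only: ReLU(X W + b).
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

# Stage b) lightweight classifier on the compressed codes
# (decision tree as a stand-in for J48 / C4.5).
clf = DecisionTreeClassifier(random_state=0)
clf.fit(encode(ae, X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(encode(ae, X_te))))
```

The design rationale mirrors the abstract: the autoencoder compresses each window into a low-dimensional code offline, so that at inference time the resource-constrained device only runs a cheap encoder pass plus a shallow tree.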