In a recent paper [1] G. P. Barker gave a proof of the following theorem.

Theorem 1. -- If $S_1,S_2,\ldots,S_n$ are $n\times n$ (upper) triangular matrices (over a ring) such that the $(i,i)$ element of $S_i$ is zero, then
\[ S_1S_2\cdots S_n=0. \]

We give here two simple proofs and a generalization of that theorem.

1. -- Proof (by induction on $n$). For $n=1$ there is nothing to prove, as $S_1=0$. For $n=2$ we have
\[ S_1=\begin{bmatrix} 0 & x \\ 0 & x \end{bmatrix}, \qquad S_2=\begin{bmatrix} x & x \\ 0 & 0 \end{bmatrix}, \]
and by straightforward verification $S_1S_2=0$.

(Remark. -- We denote by $x$ an arbitrary element, which may or may not be equal to zero. We also denote different matrices by the same letter $S$, since there can be no ambiguity about the meaning of the letter. Note that even in the case $n=2$ the order of the factors in $S_1S_2$ cannot be reversed: $S_2S_1$ may be different from zero.)

Assume now that the theorem is proved for $n=m$. Then
\[ S_1S_2\cdots S_mS_{m+1}=TS_{m+1}=0, \]
because
\[ T=S_1S_2\cdots S_m=\begin{bmatrix} O_m & u_m \\ o_m & x \end{bmatrix}, \qquad S_{m+1}=\begin{bmatrix} A_m & t_m \\ o_m & 0 \end{bmatrix}, \]
where $O_m$ is the zero matrix of order $m\times m$ (by the induction hypothesis: $T$ is upper triangular, and its leading $m\times m$ block is the product of the leading blocks of $S_1,\ldots,S_m$, which satisfy the hypothesis of the theorem for $n=m$), $A_m$ is an upper triangular matrix of order $m\times m$ (by the definition), $u_m$ and $t_m$ are $m$-dimensional column vectors, and $o_m$ is the $m$-dimensional zero row vector. The proof is also valid in the partitioned form of the theorem.

2. -- Proof (by direct verification). We use the following well-known facts from linear algebra: the upper triangularity of matrices is preserved under matrix multiplication; and the product $AB$ of two matrices $A,B$ can be obtained from $A$ (resp. from $B$) by performing on the columns of $A$ (on the rows of $B$) the same changes one has to perform on the columns (resp. rows) of the identity matrix $E$ in order to obtain $B$ (resp. $A$). Now we start to calculate $S_1S_2$.
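Theorem 1 is easy to check numerically. The following sketch (not part of the original note; the names and the use of NumPy over the integers are our own choices) builds $n$ random upper triangular integer matrices with the required zero diagonal entries and verifies that their product vanishes:

```python
import numpy as np

# Numerical sanity check of Theorem 1 over the integers.
rng = np.random.default_rng(0)
n = 5

def random_S(i, n):
    """Random n x n upper triangular integer matrix whose (i+1, i+1)
    entry is zero (0-based index i), as required by Theorem 1."""
    S = np.triu(rng.integers(-5, 6, size=(n, n)))
    S[i, i] = 0
    return S

factors = [random_S(i, n) for i in range(n)]

P = np.eye(n, dtype=np.int64)
for S in factors:
    P = P @ S

print(np.all(P == 0))  # True: the product S_1 S_2 ... S_n vanishes
```

Changing the order of the factors (e.g. multiplying $S_2S_1$ for $n=2$) will in general produce a nonzero product, in line with the remark above.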
$S_2$ is obtained from $E$ by: a) multiplying the first column by $x$; b) multiplying the second column by zero; c) adding the first column, multiplied by $x$, to the second column; d) some changes on the remaining columns, which we ignore as unimportant for our goal. Due to these changes the first two columns of $S_1S_2$ are equal to zero (the operations a) and c) have no effect at all, since the first column of $S_1$ consists entirely of zeros). In the same manner we conclude that the first three columns of $S_1S_2S_3$ are equal to zero, and so on. Finally, all elements of $S_1S_2\cdots S_n$ are equal to zero. This proof is also valid in the partitioned form of the theorem.

We have now, in a more precise form:

Theorem 2. -- Let the matrices $S_i$, $i=1,2,\ldots,n$, be as in Theorem 1, and let $S_i$ be arbitrary for $i>n$. Then the matrix
\[ S_1S_2\cdots S_k \]
has its first
\[ n \dotminus (n \dotminus k) = \min(n,k) \]
columns composed entirely of zeros, where $a \dotminus b = \max(a-b,0)$ denotes truncated subtraction. For $k=n$ we obtain Theorem 1.
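Theorem 2 can likewise be checked numerically. The sketch below (our own construction, again using NumPy over the integers; the names `factor`, `N` are ours) multiplies the special factors $S_1,\ldots,S_n$ followed by arbitrary factors, and asserts after each step that the first $\min(n,k)$ columns of the partial product are zero:

```python
import numpy as np

# Check of Theorem 2: after k factors, the first min(n, k) columns vanish.
rng = np.random.default_rng(1)
n = 4
N = 7  # number of factors, deliberately larger than n

def factor(i):
    """i-th factor (0-based): special triangular for i < n, as in Theorem 1;
    arbitrary for i >= n."""
    if i < n:
        S = np.triu(rng.integers(-3, 4, size=(n, n)))
        S[i, i] = 0
        return S
    return rng.integers(-3, 4, size=(n, n))

P = np.eye(n, dtype=np.int64)
for k in range(1, N + 1):
    P = P @ factor(k - 1)
    # min(n, k) equals n -. (n -. k) with truncated subtraction
    assert np.all(P[:, :min(n, k)] == 0)

print(f"Theorem 2 verified for k = 1 to {N}")
```

For $k \ge n$ the bound $\min(n,k) = n$ means the whole product is zero, so the loop also re-verifies Theorem 1 along the way.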