Analogical proportions are statements of the form ‘a is to b as c is to d’, formally denoted a:b::c:d. This means that a and b differ in the same way as c and d do (and, likewise, b differs from a as d differs from c), as revealed by their logical modeling. The postulates supposed to govern such proportions entail that when a:b::c:d holds, seven permutations of a, b, c, d still constitute valid analogies. It can also be derived that a:b::b:a does not hold unless a=b. From a machine learning perspective, this provides guidelines for building training sets of positive and negative examples. We then propose improved methods for classifying word analogies and for solving analogical equations. Viewing words as vectors in a multi-dimensional space, we depart from the traditional parallelogram view of analogy and adopt a purely machine-learning approach. In some sense, we learn a functional definition of analogical proportions without assuming any pre-existing formula. We mainly use the logical properties of proportions to define our training sets and to design suitable neural networks that approximate the hidden relations. Using a GloVe embedding, our results show high accuracy and improve on the state of the art for word analogy-solving problems.
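To make the logical side concrete, the following minimal sketch assumes the standard Boolean model of analogical proportion from this line of work (the function names boolean_ap and equivalent_forms are ours, for illustration only). It verifies that a valid proportion survives the seven non-trivial permutations generated by symmetry and central permutation, and that a:b::b:a forces a=b:

```python
from itertools import product

def boolean_ap(a, b, c, d):
    """Boolean analogical proportion a:b::c:d --
    a differs from b exactly as c differs from d."""
    return (bool(a and not b) == bool(c and not d)
            and bool(not a and b) == bool(not c and d))

def equivalent_forms(a, b, c, d):
    """The eight orderings made equivalent by the postulates:
    symmetry (a:b::c:d => c:d::a:b) and central permutation
    (a:b::c:d => a:c::b:d)."""
    return [(a, b, c, d), (c, d, a, b),
            (a, c, b, d), (c, a, d, b),
            (b, d, a, c), (d, b, c, a),
            (b, a, d, c), (d, c, b, a)]

# Sanity check over all Boolean valuations: a valid proportion stays
# valid under the seven non-trivial permutations, and a:b::b:a holds
# only when a == b.
for a, b, c, d in product([0, 1], repeat=4):
    if boolean_ap(a, b, c, d):
        assert all(boolean_ap(*p) for p in equivalent_forms(a, b, c, d))
    assert boolean_ap(a, b, b, a) == (a == b)
print("postulate checks passed")
```

In this spirit, the eight equivalent orderings of a known analogy supply positive training examples, while invalid orderings such as a:b::b:a (with a≠b) supply negative ones.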