Multi Utility E-Controlled cum Voice Operated Farm Vehicle

International Journal of Computer Applications 01/2010;
Source: DOAJ

ABSTRACT This paper describes the design and construction of the Multi Utility E-Controlled cum Voice Operated Farm Vehicle (MUEVOFV). The vehicle is intended to explore ways of increasing productivity with expensive agricultural mobile machinery by taking over some of the operator's tasks, allowing the operator to control the machinery from a remote place (E-control) through voice commands, and to control several accessories of the machine simultaneously. The system was designed to satisfy the needs of various farm operations in unknown agricultural fields. The controller has a layered architecture and, using sensor modules, supports two degrees of cooperation between the operator and the robotic vehicle: direct control and supervisory control. The vehicle's position and heading direction can be controlled by global positioning.
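The abstract states that the vehicle's position and heading are controlled by global positioning. A minimal sketch of the underlying geometry, assuming a simple waypoint-following scheme (the paper does not give its actual control law; the function names and the proportional-steering idea here are illustrative assumptions): compute the great-circle bearing from the current GPS fix to the target waypoint, then the signed difference between that bearing and the current heading gives a steering correction.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), degrees in [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def heading_error_deg(current_heading, target_bearing):
    """Signed error in (-180, 180]; positive means the vehicle should steer right."""
    return (target_bearing - current_heading + 180.0) % 360.0 - 180.0

# Example: vehicle at the equator heading north, waypoint due east.
b = bearing_deg(0.0, 0.0, 0.0, 1.0)          # bearing to waypoint: 90 degrees
err = heading_error_deg(0.0, b)               # steer 90 degrees to the right
```

A supervisory controller would recompute this error on every GPS update and feed it (e.g. scaled by a proportional gain) to the steering actuator, while direct control bypasses it in favor of the operator's voice commands.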

Related article:
    ABSTRACT: Gender classification is one of the most important processes in speech processing. It is generally done by taking pitch as the feature, since the pitch of a female voice is usually higher than that of a male voice. In some cases, however, a male speaker's pitch is higher and a female speaker's is lower, and classification based on pitch alone fails. To address this drawback, the proposed gender classification method considers three features (energy entropy, short-time energy, and zero-crossing rate) and uses fuzzy logic and a neural network to identify the gender of a given speech signal. A training dataset built from these three features is used to train the fuzzy logic system and the neural network. After training, a speech signal is given as input, the fuzzy system and the neural network each produce an output, and the mean of the two outputs indicates the speaker's gender. The results demonstrate the performance of the method in gender classification. Keywords: Gender classification – Fuzzy logic – Neural network – Energy entropy – Short-time energy – Zero-crossing rate
    International Journal of Speech Technology 01/2011; 14(4):377-391.
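The three features named in that abstract have standard textbook definitions, and a minimal sketch of them is shown below. This is an illustrative assumption about how the paper computes them (frame lengths, sub-frame count, and normalization are not specified in the abstract): short-time energy is the mean squared amplitude of a frame, zero-crossing rate is the fraction of adjacent samples that change sign, and energy entropy is the Shannon entropy of the energy distribution across sub-frames.

```python
import math

def short_time_energy(frame):
    """Mean squared amplitude of one speech frame."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def energy_entropy(frame, n_sub=8):
    """Shannon entropy (bits) of the energy spread over n_sub sub-frames."""
    step = len(frame) // n_sub
    energies = [sum(s * s for s in frame[i * step:(i + 1) * step])
                for i in range(n_sub)]
    total = sum(energies) or 1e-12          # guard against an all-zero frame
    probs = [e / total for e in energies]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Example: a maximally alternating frame has energy 1.0, ZCR 1.0,
# and uniform energy across sub-frames (entropy log2(8) = 3 bits).
frame = [1.0, -1.0] * 64
features = (short_time_energy(frame), zero_crossing_rate(frame), energy_entropy(frame))
```

A feature vector like `features`, computed per frame and averaged over an utterance, would then serve as the input that the abstract says is fed to both the fuzzy logic system and the neural network.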