
Hassan-Roland Nasser
- Professor
- Researcher at Agroscope
About
22
Publications
35,212
Reads
136
Citations
Additional affiliations
March 2010 - March 2014
Publications (22)
This study evaluates the efficiency of an automated irrigation system using dendrometer sensors in apple orchards and compares it to a standard grower commercial irrigation approach based on soil moisture sensors. An algorithm was developed to balance daily stem shrinkage (water loss) and expansion (water uptake), aiming for a stable dendrometer si...
The use of dendrometers to measure the stem diameter (SD) of trees provides information about their actual water stress levels. The Scholander chamber is currently the gold standard for measuring stem water potential and thus for quantifying the water status of trees, despite being a laborious method, especially for apple trees. The aim of this stu...
Cow-calf contact systems are attracting increasing interest among farmers and some are already being implemented into dairy farms. However, a comprehensive assessment of animal welfare in these systems is lacking. One reason for this is the large amount of time required for behavioral observations. However, the increased use of sensors in herd mana...
Time series analysis can facilitate the detection of complex behavioral patterns and potentially provide new opportunities to assess animal welfare. The aim was to investigate whether dairy cows exhibit daily, individual patterns in activity and in area use in the barn. We predicted that behavioral patterns will be more consistent (1) within than b...
Poor quality of the images captured by a conventional vein viewer or finder renders it ineffective as a diagnostic tool. This is a result of an insufficient amount of venous blood in the superficial veins and the low resolution of the near-infrared (NIR) cameras used in traditional vein viewers. In this paper, we propose using a high resolu...
The retina encodes visual scenes by trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new freely accessible end-user software that helps better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscient...
Communication is one of the principal means of dealing with the surrounding environment. Communication between people who are deaf or hard of hearing and everyone around them is difficult, due to the lack of a common language between both parties. Statistics and studies from the World Health Organization show that 360 mil...
We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle me...
Recent experimental advances have made it possible to record up to several hundred neurons simultaneously in the cortex or in the retina. Analyzing such data requires mathematical and numerical methods to describe the spatio-temporal correlations in population activity. This can be done thanks to the Maximum Entropy method. Here, a crucial paramete...
With the advent of new Multi-Electrode Arrays techniques (MEA), the simultaneous recording of the activity up to hundreds of neurons over a dense configuration supplies today a critical database to unravel the role of specific neural assemblies. Thus, the analysis of spike trains obtained from in vivo or in vitro experimental data requires suitable...
doi:10.1186/1471-2202-14-S1-P57
In this talk we shall argue that Gibbs distributions, considered in more general setting than the initial concept coming from statistical physics and thermodynamics, are canonical models for spike train statistics analysis. This statement is based on the three following facts, developed in the talk. 1. The so-called Maximum Entropy Principle allows...
Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In a first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt)...
Recent experimental advances have made it possible to record several hundred neurons simultaneously in the retina as well as in the cortex. Analyzing such a huge amount of data requires elaborate statistical, mathematical and numerical methods to describe both the spatio-temporal structure of the population activity and its relevance to senso...
Recent experimental advances have made it possible to record several hundred neurons simultaneously in the retina as well as in the cortex. Analyzing such a huge amount of data requires elaborate statistical, mathematical and numerical methods to describe the spatio-temporal structure of the population activity, and its relevance to sensory coding. Am...
We briefly review and highlight the consequences of rigorous and exact results obtained in \cite{cessac:10}, characterizing the statistics of spike trains in a network of leaky Integrate-and-Fire neurons, where time is discrete and where neurons are subject to noise, without restriction on the synaptic weights connectivity. The main result is that...
We review here the basics of the formalism of Gibbs distributions and its numerical implementation (details published elsewhere \cite{vasquez-cessac-etal:10}) in order to characterize the statistics of multi-unit spike trains. We present this here with the aim of analyzing and modeling synthetic data, especially bio-inspired simulated data e.g...
The modelling of the visual system needs to provide realistic retinal responses (spike trains) to visual stimuli. Our team [1] developed a retina simulator that allows large-scale simulation (about 100,000 cells) and provides spike train output as a response to visual stimuli. This model contains several blocks representing the retina layers. It...
Questions (15)
We are developing a project based on LIPUS, and we need to buy a piezo crystal with the following properties:
- Circular shape
- Diameter: 6 mm
- Frequency: 1.6 MHz
Can anyone suggest a website from which we can buy such crystals?
Thanks in advance.
In the literature, it is not clear what the difference between these terms is, and there seems to be widespread misuse of them.
Online resources and organizational reports seem to use them interchangeably. At the same time, research papers suggest several definitions for each of them, which are sometimes quite unrelated.
Could anyone help clarify the difference between them?
Say we have a set of alpha_i (1 <= i <= 8) and a real number Z that belongs to R, where Z is constant and predefined (say Z = 4).
Finding all possible combinations of alpha_i would be an NP problem. However, the task becomes tractable if we discretize. If we discretize at 0.01, we have 400^8 possible combinations. It is possible to scan all of them and then test, but computationally this would be very costly.
Is there an efficient algorithm for this task?
Can we generalize this relationship on other multidimensional distributions?
After my PhD, I started a private software-development company with two colleagues. I am looking to participate in a research project that can lead to developing innovative software that adds value to the marketplace. We would also like the software to create jobs along with its market value.
We can participate at many levels:
- Studying the economic impact of the idea/software.
- Developing the code and graphical user interface.
- Design and software ergonomics.
How could such a scenario come about?
I have a set of underground parking garages, and I am searching for a way to build a navigation tool: one that creates the 3D shape of the garages and then uses the 2D (?) or 3D (?) data to navigate them virtually.
A challenging issue in today's visual neuroscience community is to understand the activity of a network of Retinal Ganglion Cells (RGC) jointly with a stimulus. In this area, we have several mathematical and computational tools based on several frameworks (empirical statistics, encoding and decoding, probabilistic modeling).
In the spirit of our ultimate goal (understanding the response/stimulus relationship), we face two important (and highly correlated) questions that must be answered to set a clearer vision and to develop suitable tools:
1- What framework (and mathematical tools) should we use? Should we build a framework based on all relevant works in the literature? Are there any guidelines for deciding which tools are suitable?
2- How can we bring a common tool to both biologists (neurobiologists, experimentalists) and programmers (computational neuroscientists, computer scientists)? Clearly, the best tool for a biologist is a graphical user interface (but they will be limited to the available features), while for programmers it is an IDE or a programming language (but their work will hardly be accessible to biologists). So, is there a way to build a single flexible tool that both groups can use, with a common vision but two different ways of working?
I am looking for people who work in neuroscience labs (mostly computational neuroscience and probabilistic modeling) and at the same time have industry connections, i.e., who have contributed to a spin-off or other technology-transfer activities.
I have a MaxEnt probability distribution that I want to fit to empirical data. I implemented the algorithm described in this paper (http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.89.6189). The author claims, and proves mathematically, convergence.
I observe very strange behavior DURING THE PARALLEL UPDATE (when all the parameters are updated at once): after very nice convergence, where the error decreases for a few (5-10) iterations (sometimes 1-2), the error increases dramatically.
I have the following explanation in mind:
Convergence oscillations. When the procedure reaches the bottom, it begins to oscillate around the convergence point, since the problem is relaxed and we permit some error on the estimated features.
However, the amplitude of the oscillations is very large, so I do not think oscillations alone can explain it.
Has anyone experienced a similar problem? Any interpretation?
Thank you very much.
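A common workaround (not from the cited paper) is to damp the parallel update with a step size smaller than one, trading speed for stability near the optimum. A minimal sketch on a toy MaxEnt model with independent units, p(x_i = 1) = sigmoid(lambda_i); the function name, the model choice, and the parameter values are all assumptions for illustration:

```python
import numpy as np

def fit_maxent_damped(emp_means, lr=0.3, tol=1e-8, max_iter=5000):
    """Fit lambda_i of an independent-unit MaxEnt model
    p(x_i = 1) = sigmoid(lambda_i) so that the model means match
    emp_means.  A damped parallel update (step size lr < 1) tames
    the oscillations a full parallel step can produce near the
    optimum."""
    lam = np.zeros_like(emp_means, dtype=float)
    for _ in range(max_iter):
        model_means = 1.0 / (1.0 + np.exp(-lam))   # sigmoid
        grad = emp_means - model_means             # d(log-likelihood)/d(lambda)
        lam += lr * grad                           # damped parallel step
        if np.max(np.abs(grad)) < tol:
            break
    return lam
```

With lr = 1 this reduces to the raw parallel step; shrinking lr, or halving it whenever the error increases, often suppresses exactly the kind of blow-up described above.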
Please also consider indicating the application. Is it for retinal spike trains or cortex acquisitions...?
I am trying to fit spatio-temporal maximum entropy parameters (Lagrange multipliers) to real spike train data. I would like to know if anyone can recommend a convergence criterion other than the one I'm currently using.
The averages of the features with respect to the empirical distribution are <emp_i>, and those of the target distribution are <tar_i>. Each time I update the target distribution, I compute the error as:
error^2 = max_i ( (<emp_i> - <tar_i>)^2 / SigmaMax ),
where SigmaMax is the maximum mean squared deviation measured on the empirical distribution, i.e., the maximum over i of <emp_i>(1 - <emp_i>) / DistributionLength.
The problem with this criterion is that it is not really reliable, because the sparsity of some features sometimes drives the denominator close to zero. The absolute difference of errors is also not reliable.
Any suggestions?
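One way to make such a criterion robust to sparse features is to floor the per-feature variance so the denominator can never vanish. A sketch of this idea; the floor at the single-observation level 1/n_samples and the function name are assumptions, not part of the original criterion:

```python
import numpy as np

def convergence_error(emp, tar, n_samples):
    """Max over features of |emp - tar| in units of the empirical
    standard error.  The per-feature variance emp*(1-emp) is floored
    at the single-observation level 1/n_samples, so sparse features
    (emp close to 0) cannot drive the denominator to zero."""
    var = np.maximum(emp * (1.0 - emp), 1.0 / n_samples) / n_samples
    return float(np.max(np.abs(emp - tar) / np.sqrt(var)))
```

An error around 1 then means the remaining mismatch is comparable to the sampling noise of the empirical averages, which is a natural place to stop the fit.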