Questions related to Multimodality
Most polarization-maintaining fibres available are single-mode fibres. Does anyone know of any multimode polarization-maintaining fibre products that are available?
Edit: the paper was approved so if you want to see it just message me :)
I'm writing a paper on a multimodal active sham device for placebo interventions with electrostimulators. We believe it has a low manufacturing cost, but it would be better to have some baseline for comparison. Have any of you ever asked a manufacturer to produce a sham replica of an electrostimulator for use in blinded trials? If so, how much did it cost? Was it an easy procedure?
Basically, I have read about mulsemedia, more precisely the MPEG-V standard. However, we also have multimodal applications and a corresponding standard, the W3C Multimodal Interaction Framework.
I'd like to know whether these concepts are antagonistic or whether they have similarities.
We have proposed an algorithm for multiobjective multimodal optimization problems and tested it on the CEC 2019 benchmark suite. We also need to show results on some real-world problem. Kindly help.
Hi! Does anyone know if I can buy core-only glass fibers directly, meaning no cladding or coating?
Ideally, I am looking for a multimode glass core for biosensing purposes.
I formulated chitosan nanoparticles from a 0.5% w/v chitosan solution and 0.5% w/v TPP. After adding the TPP solution drop by drop to the chitosan solution, I obtained a turbid suspension. I centrifuged at 3500 rpm for 30 min and a pellet formed, which I was able to resuspend with an ultrasound probe. After size measurement by DLS, I have a multimodal distribution, with most of the particles having a radius greater than 400 nm.
The actual registration process is far from optimal as you can see from the attached picture.
Any idea on how to improve the registration process result?
What is the bandwidth specification of standard multimode OFC cables available for communications and networking? Will they carry the entire UV-VIS-IR spectrum?
- In the recent CV field, in the world's top journals and conferences, we can see that many papers use multimodal/multi-view information for 3D object detection.
- However, it is rare to see multimodal information used for 2D object detection in autonomous driving scenes. The most recent example is 'Seeing Through Fog Without Seeing Fog' (Bijelic et al., 2019), but that article's main contribution is an adverse-weather dataset.
- How can the performance of a 2D object detector be improved through multimodal fusion in the autonomous driving scene?
- Or, how can depth information be used for 2D object detection?
Dear All, within our new European project SYN+AIR, related to air transport, we are running an online survey which aims at identifying the mobility choices for travel to and from the airport. We are glad to invite you to fill in the survey https://ec.europa.eu/eusurvey/runner/SYN_AIR_Traveller_Survey_2021 The questionnaire is available in 5 languages (English, Greek, Spanish, Italian, Serbian) and takes approximately 10 minutes. All adults who travel or used to travel by plane (before the Covid-19 pandemic) can answer this survey. You may find information about the project at http://syn-air.eu/
Please, feel free to share/disseminate this request. Thanks a lot for your attention and contribution. #SESAR #H2020 #SYN+AIR
We have datasets that follow Gaussian distributions.
The data were obtained from different, irregular, multimodal Gaussian distributions.
How can we use the k-means clustering method for highly optimal clustering, so that the most statistically similar data end up in the same group?
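As a minimal sketch (assuming scikit-learn is available, with hypothetical synthetic blobs standing in for the real data), one could compare plain k-means against a Gaussian mixture, which models each mode's own covariance and therefore often suits irregular multimodal Gaussian data better:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in: three Gaussian modes with unequal spreads.
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (200, 2)),
               rng.normal([3.0, 3.0], 1.0, (200, 2)),
               rng.normal([0.0, 4.0], 0.5, (200, 2))])

# k-means implicitly assumes spherical, equal-variance clusters.
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# A Gaussian mixture fits a full covariance per mode, which can separate
# irregular Gaussian modes that k-means splits or merges.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      n_init=5, random_state=0).fit(X)
gmm_labels = gmm.predict(X)
```

One common design choice is to initialize the mixture from the k-means centroids (scikit-learn does this by default), keeping k-means' speed while relaxing its equal-variance assumption.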
We have seen stability in the supply chains of goods, food in particular, continue mostly undisturbed during the current Covid-19 pandemic.
It is very reassuring at a time of uncertainty and macro-risks falling onto societies.
How much do we owe to the optimised management and supervision of container transport, and the multimodal support for it with deep-sea vessels, harbour feeder vessels, trains, and trucks/lorries?
What is the granularity involved? Hub to hub, regional distribution, local delivery?
Do we think that the connectivity models with matrices, modelling the transport connections and the flows per category (passengers, freight; within freight: categories of goods), could benefit from a synthetic model aggregation: a single matrix of sets federating what has so far been spread over several separate matrices of numbers?
What do you think?
Below are references on container transport and on matrices of sets.
A) Matrices of sets
[i] a simple rationale
[ii] use for containers
 Generating scenarios for simulation and optimization of container terminal logistics by Sönke Hartmann, 2002
Optimising Container Placement in a Sea Harbour, PhD thesis by Yachba Khedidja
 Impact of integrating the intelligent product concept into the container supply chain platform, PhD thesis by Mohamed Yassine Samiri
When I use a multimode fiber to guide the quantum-dot ensemble photoluminescence signal to my Horiba FHR100 spectrometer, I always get a spectrum with periodic peaks (see pic.jpg).
The peaks have an exact period of 1.5 nm.
Only when I guide the PL signal with a single-mode fiber can I get the smooth signal I want to see.
Could someone please explain what the source of these periodic peaks is and how I can avoid them?
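Periodic spectral fringes with a fixed wavelength period usually point to etalon (Fabry-Perot) interference somewhere in the path, e.g. between fiber facets or connector surfaces. A quick sanity check is to compute the cavity length implied by the 1.5 nm period; the 800 nm center wavelength and n = 1.45 below are assumptions for illustration, not values from the question:

```python
def etalon_length(center_wavelength_m, fringe_period_m, n=1.45):
    """Cavity length implied by a spectral fringe period (free spectral
    range): L = lambda^2 / (2 * n * FSR)."""
    return center_wavelength_m**2 / (2 * n * fringe_period_m)

# Assumed example: 1.5 nm fringes around 800 nm in glass (n ~ 1.45)
length = etalon_length(800e-9, 1.5e-9)  # cavity length in meters
```

If the implied length matches a physical gap or element in the setup (here on the order of 150 µm, roughly a facet gap or window thickness), that element is the likely etalon.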
I am working on my diploma thesis on an eye endoscope. I would like to know more about speckle in multimode fiber. I would like to reduce speckle in MM fiber using vibration, but I do not know why vibration reduces the speckle, or what happens to the modes in the optical fiber.
Thank you for your answer!
I want to compare traditional classroom discourse interaction with the new discourse interaction system brought about by the virus, and I also want to examine professors' opinions about the differences between these two discourse structures. However, I need some guidance: should I ground the study in poststructuralism or constructivism, and, among methodological frameworks, is multimodal critical discourse analysis the right one? I am also unsure whether the comparison should rest on one theory or draw results from two theories.
Thank you so much in advance.
I am working on my diploma thesis on an eye endoscope. I would like to know more about the origin of speckle in multimode fiber. I suppose that the speckles depend on the fiber modes, but I do not know why high-order modes should travel at a different speed than low-order modes, or how this fact influences the speckle pattern.
Thank you for your answer!
Accurate image captioning with the use of multimodal neural networks has been a hot topic in the field of Deep Learning. I have been working with several of these approaches and the algorithms seem to give very promising results.
But when it comes to using image captioning in real-world applications, only a few are usually mentioned, such as assistive aids for the blind and content generation.
I'm really interested to know if there are any other good applications (already existing or potential) where image captioning can be used either directly or as a support process. Would love to hear some ideas.
Thanks in advance.
I have to analyze a construction project (comprising images and text) from the viewpoint of multimodal analysis. Does anyone have theoretical information about it, or samples of multimodal analyses? Thanks a lot.
I need to know whether the MM8 or the MFP-3D Origin gives more reliable data for nanomechanical measurements such as nanotube stiffness. Moreover, which one performs best in liquid and can measure nanoparticle-protein interaction forces? If anyone can also tell me about the vibration sensitivity of these instruments, that would be great.
Is it ok to say political caricatures instead of political cartoons in visual and multimodal metaphor?
Recently, I have been using multimodal machine learning methods to study computer-aided diagnosis of cataract, but I do not have enough data. Where can I find a multimodal dataset? Ideally it would include both image and structured data modalities.
I have a keystroke model, which is one of the modalities in my multimodal biometric system. The keystroke model gives me an EER of 0.09 using scaled Manhattan distance. I then normalize this distance into the range [0, 1] using tanh normalization, based on the mean and standard deviation of the matching scores of genuine users. But when I evaluate the normalized scores, I get an EER of 0.997. Is there something I am doing wrong?
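For reference, a minimal sketch of the classic tanh score normalization (the constant 0.01 is the commonly quoted scale factor; treat the exact form as an assumption). Note that it is monotonic, so it preserves the ranking of scores; what it does not do is convert a distance (lower = genuine) into a similarity (higher = genuine), which has to be handled separately before computing the EER:

```python
import numpy as np

def tanh_normalize(scores, genuine_mean, genuine_std):
    """Classic tanh score normalization: maps raw matching scores into
    (0, 1) using the mean and std dev of the genuine-score distribution."""
    scores = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (scores - genuine_mean) / genuine_std) + 1.0)

# Monotonic: larger raw scores stay larger after normalization.
s = tanh_normalize([0.0, 1.0, 2.0], genuine_mean=1.0, genuine_std=1.0)
```

Because the mapping is order-preserving, a correct implementation cannot by itself change the EER; an EER flipping to roughly 1 minus its old value is the usual symptom of the score polarity (distance vs. similarity) being inverted somewhere in the evaluation.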
w/a = 0.65 + 1.619/V^(3/2) + 2.879/V^6
Is this formula also valid for calculating the spot size of the fundamental mode of a step-index multimode waveguide?
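For reference, the (corrected) Marcuse approximation above can be evaluated like this; note that it was derived for the fundamental mode of a step-index fiber and is usually quoted as accurate for roughly 0.8 < V < 2.5, so extrapolating it to a highly multimoded waveguide is questionable:

```python
def marcuse_mode_radius_ratio(V):
    """Marcuse approximation for the fundamental-mode spot size of a
    step-index fiber, as a fraction of the core radius a:
    w/a = 0.65 + 1.619*V**(-3/2) + 2.879*V**(-6)."""
    return 0.65 + 1.619 / V**1.5 + 2.879 / V**6

ratio = marcuse_mode_radius_ratio(2.405)  # at the single-mode cutoff
```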
I am conducting research on Multimodal Discourse Analysis (MMDA) field. Which are the seminal works (books, papers, ...) in Multimodal Discourse Analysis (MMDA)?
I really appreciate knowing other researchers' point of view.
Can we use score-level fusion of genuine and impostor scores from multi-biometric techniques for multimodal emotion recognition?
I am currently doing research on a fiber sensor using MMI, but I don't know how to determine the length of the multimode section. All the references I have read use the 4th self-image for determining the length of the multimode section. Why is the 4th self-image used?
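As a hedged sketch of where such lengths come from (assuming the general-interference analysis of Soldano & Pennings, 1995, and purely hypothetical numbers): the two lowest modes beat with length L_pi ≈ 4·n·W_e²/(3·λ0), single self-images appear at integer multiples of 3·L_pi, and the p-th self-image distance therefore scales linearly with p:

```python
def mmi_self_image_length(n_eff, width_e_m, wavelength_m, p=4):
    """p-th single self-image distance in a multimode interference
    section (general interference, Soldano & Pennings 1995):
    L_pi ~= 4*n*W_e**2 / (3*lambda0);  L_p = p * 3 * L_pi."""
    L_pi = 4.0 * n_eff * width_e_m**2 / (3.0 * wavelength_m)
    return p * 3.0 * L_pi

# Hypothetical numbers: n_eff = 1.45, effective width 50 um, 1550 nm
L4 = mmi_self_image_length(1.45, 50e-6, 1550e-9, p=4)
```

Which p is chosen in practice is a trade-off, e.g. between device length, fabrication tolerance, and image quality; the specific reasons for the 4th image would have to come from the cited references.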
I'm doing some research on dimensionality reduction using swarm intelligence algorithms. As per the no-free-lunch theorem, there is no algorithm that best suits all problems. So, to be able to find the best subset, I need to determine whether the problem is unimodal or multimodal. The data has 300 features and 1000 instances. Are there any visualization methods that can help in this regard?
I made an experiment in which I measure a certain parameter a number of times. The result is over 100 samples whose distribution is not really normal. Due to the properties of my specimens, results tend to gather around 3 or 4 modes; the distribution is multimodal. I would like to find the type A uncertainty of the measurement. When the distribution is normal and unimodal, the standard deviation is easily calculated. How do I proceed when the distribution is multimodal? I computed the standard deviation the same way as for a unimodal distribution, but I am not sure this is correct. Are there any dedicated standard deviation formulas for multimodal distributions? Even if I split my results into 3 or 4 separate unimodal sets, each with its own standard deviation, how do I find the overall deviation?
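If the 3 or 4 modes are characterized separately (weights as fractions of samples, plus a mean and standard deviation per mode), the overall standard deviation follows from the law of total variance; a minimal sketch:

```python
import numpy as np

def mixture_std(weights, means, stds):
    """Overall standard deviation of a mixture of unimodal components,
    via the law of total variance:
    var = sum_i w_i*(sigma_i**2 + mu_i**2) - (sum_i w_i*mu_i)**2."""
    w, mu, s = (np.asarray(a, dtype=float) for a in (weights, means, stds))
    w = w / w.sum()                  # normalize weights to fractions
    grand_mean = np.sum(w * mu)
    var = np.sum(w * (s**2 + mu**2)) - grand_mean**2
    return np.sqrt(var)

# Two equal modes at 0 and 2, each with std 1 -> overall std sqrt(2)
overall = mixture_std([0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```

This reproduces exactly the standard deviation computed over the pooled sample, which is why computing it "the same way as for unimodal" is numerically consistent; whether it is the right summary for a type A uncertainty is a separate metrological question.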
I know that a single-mode fiber (SMF) only allows transmission of a single mode, so we have to do mode matching when coupling into an SMF. However, I am wondering what happens if we do not follow the strict mode-coupling condition and simply try to couple all the energy into the SMF without mode matching: how much loss will we get? Does the length of the SMF influence the coupling efficiency in that situation?
Is it possible to achieve lower loss with a shorter SMF (in the range of a few meters)?
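For the ideal aligned case, the overlap integral at the input facet gives a quick estimate of the coupling efficiency, and the fiber length does not appear in it: unmatched light excites cladding modes, which typically leak away within centimeters, so a few meters of SMF behaves essentially like a long one. A sketch with hypothetical waist radii, assuming both fields are well approximated by Gaussians:

```python
def gaussian_coupling_efficiency(w1, w2):
    """Power coupling efficiency between two aligned Gaussian modes with
    waist radii w1 and w2 (no tilt or lateral offset):
    eta = (2*w1*w2 / (w1**2 + w2**2))**2."""
    return (2.0 * w1 * w2 / (w1**2 + w2**2))**2

eta_matched = gaussian_coupling_efficiency(5.2e-6, 5.2e-6)   # perfect match
eta_mismatch = gaussian_coupling_efficiency(5.2e-6, 10.4e-6)  # 2x waist error
```

A factor-of-two waist mismatch alone already costs about 36% of the power, before any tilt or offset errors are added.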
For the boundary conditions, it needs xmin, xmax, ymin, and ymax. Should I set all of them to PML or not?
And what size is sufficient for the FDE rectangular simulation region?
I am making a design where I have to splice PM single-mode fiber to graded-index multimode fiber. Can this be done with generic splicers, or is a specialized one needed? What parameters should be considered to make an acceptable splice?
In problems with many local optima (multimodal) and many variables to optimize (multidimensional), which PSO variants provide:
- better exploration capabilities at the beginning of the search,
- possibility of escaping local optima,
- capabilities to find the optimal solution when it is not at the center of the coordinate system,
- better quality of the final solution (more exploitation in the final period of the search process), and
- low computational load (fewer evaluations of the objective function, shorter computation times)?
I would be grateful if your response cited the bibliographic source where the PSO variant is published.
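As a baseline against which variants can be compared, here is a minimal global-best PSO with the linearly decreasing inertia weight of Shi and Eberhart (1998), which addresses the exploration-then-exploitation schedule in the list above; all parameter values are assumptions, and the test function (a sphere shifted off the origin) is only illustrative:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w0=0.9, w1=0.4,
        c1=2.0, c2=2.0, seed=0):
    """Minimal global-best PSO minimizing f over box bounds (lo, hi).
    Inertia weight decreases linearly from w0 (exploration) to w1
    (exploitation) over the run."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = w0 + (w1 - w0) * t / (iters - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside the box
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

Shifting the test optimum away from the origin (e.g. minimizing sum((x - 1.5)**2)) is a simple way to check the third property in the list, since some variants are biased toward the center of the coordinate system.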
In (Wu, Y., et al., 2017), the authors use two kinds of objective functions (MI and DTV) with four optimization methods.
The attached table shows the objective function results and RMSE. However, I can't understand how CLPSO with MI, DE with MI, ACO with MI, and LMACO with MI can have Mean and Best results for DTV, since they run with MI. If the authors had written CLPSO, DE, ACO, and LMACO without MI, that would be correct, but they link the results with MI.
I hope my inquiry is clear.
I'm interested in visual and pictorial representations. Actually, I have an idea that they have a relationship with intertextuality.
If there are any studies that focus on intertextual analysis in multimodal (visual/pictorial) representation, please let me know.
Dear RG members,
I once came across a software tool developed by Kay O'Halloran to analyse moving pictures/videos. If you know of such multimodal tools/models, please share.
I am simulating a fiber-optic liquid level sensor in which I use a 1 cm long multimode fiber. The cladding of the multimode fiber is removed by a chemical etching process. To measure the liquid level, some portion of the fiber is immersed in the liquid and the remaining portion is in the air. Thus, the guided-mode beam profile in the air-cladding section and that in the fluid-cladding section should be different, so there should be a mode conversion loss.
Is there a theoretical formula to calculate such mode conversion loss?
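There may not be a closed form for the exact index profiles, but the standard route is the overlap integral between the local fundamental modes of the two sections. A sketch that approximates both mode fields as Gaussians (the waist values are hypothetical; in practice you would substitute the mode profiles computed for the air-clad and liquid-clad sections):

```python
import numpy as np

def overlap_loss_db(w1, w2, r_max=50e-6, n=4000):
    """Mode-conversion loss between two radially symmetric mode fields
    E1, E2 (here Gaussians with 1/e waists w1, w2), from the overlap
    integral eta = (int E1*E2*r dr)**2 / (int E1**2*r dr * int E2**2*r dr)."""
    r = np.linspace(0.0, r_max, n)
    dr = r[1] - r[0]
    e1 = np.exp(-(r / w1)**2)
    e2 = np.exp(-(r / w2)**2)
    num = (np.sum(e1 * e2 * r) * dr)**2
    den = (np.sum(e1**2 * r) * dr) * (np.sum(e2**2 * r) * dr)
    return -10.0 * np.log10(num / den)

# Hypothetical: 5 um mode waist in the air-clad section vs 10 um in liquid
loss = overlap_loss_db(5e-6, 10e-6)
```

Replacing the Gaussian arrays with numerically computed mode profiles makes the same integral work for arbitrary (non-Gaussian) cladding geometries.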
In multimodal images, we use intensity variation (or difference) to describe how different intensity levels of images represent the same scene (captured by different sensors). It has 3 types:
- non-linear intensity variation
- linear intensity variation
- local intensity variation
I have read many papers but am still unable to find a proper definition of, or exact boundaries between, these kinds of intensity variation.
Please don't hesitate to send any hint that may guide me. I am not asking you to suggest a paper; I just need to know the difference between them.
Considering that for ethics reasons participants need to know that they are being recorded in a spontaneous video-mediated conversation (e.g. a Skype call), I need to find relevant studies that show how/when/to what extent such data can be considered reliable, or really spontaneous.
Any relevant research literature you can direct me to?
I'm looking for examples of how street protests can be analysed from the perspective of multimodal discourse analysis. What coding techniques and methods of analysis are effective in studying how the solidarity of protest movements and political dissent are constructed by multimodal texts of slogans, banners, ribbons, etc.?
Suppose the barium concentration in the coarse, accumulation, and quasi-ultrafine stages is 0.039, 0.189, and 0.056 micrograms per cubic meter, respectively; then what distribution pattern does it follow?
I would like to ask about the following benchmark functions:
Are these benchmark functions unimodal or multimodal?
Thanks to all who contribute to the answer.
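I cannot see which functions are attached, but for reference, two classic benchmarks illustrate the distinction: the sphere function is unimodal (a single global minimum and no other local minima), while Rastrigin is strongly multimodal (a regular grid of local minima surrounding the global one):

```python
import numpy as np

def sphere(x):
    """Unimodal: single global minimum at the origin, no other local minima."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2))

def rastrigin(x):
    """Multimodal: a regular grid of local minima around the global one at 0."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))
```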
Colleagues gave our department the plate reader, but the installation CD with the software (for Windows XP) was not in the box, and it is presumably lost. We have looked for the original software on the web, but there is no repository or tech support.
If someone could share the software (via Mega or Google Drive), or (God bless you) send a CD copy, our department and I personally would be really thankful.
Thanks in advance.
The results of DLS measurement always give us both a lognormal size distribution and a multimodal size distribution. I do not know exactly what the difference between them is, or which one is better. In my experiment, I want to track the change in nanoparticle diameter after adsorption of an extra molecule.
Could anybody give me some advice?
Many thanks!
I have a trimodal distribution and I would like to fit a model to it. I have already tried a Gaussian mixture model (GMM) with different numbers of components. The problem is that whenever I run the GMM, I get different weight and mean values for each mode in the distribution. Is there any other reliable model for fitting multimodal distributions?
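The run-to-run differences are expected: EM only finds a local optimum of the likelihood, and most implementations initialize it randomly. A sketch (assuming scikit-learn, with synthetic trimodal data standing in for the real sample) showing how fixing the seed and keeping the best of several initializations makes the fit stable:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic trimodal sample standing in for the real data.
x = np.concatenate([rng.normal(-4.0, 0.5, 300),
                    rng.normal(0.0, 1.0, 300),
                    rng.normal(5.0, 0.8, 300)]).reshape(-1, 1)

# n_init runs EM from several starts and keeps the best likelihood;
# random_state makes the whole procedure reproducible.
gmm = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(x)
means = np.sort(gmm.means_.ravel())
```

If the component estimates still vary strongly even with many restarts, that usually signals overlapping modes or a mis-specified number of components rather than a deficiency of the GMM itself.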
I am working with our team to develop ERAS protocols for spine surgery patients and have found nothing in the literature specific to this patient population. Any help would be appreciated. I plan to post what we develop so others can adapt it to their particular place of practice.
I am trying to get decent power out of a fiber I am coupling to my laser (dynamic range: 100 mW) but am having a hard time. The fiber is 400 µm multimode, 0.39 NA, and is connected to an SMA adapter threaded onto a mount whose angle can be adjusted. The fiber post is mounted on a translation stage. I do not have any convex lenses between the laser and the fiber at the moment because my spot size is already very, very small. But the power that comes out of my fiber when the laser is set to ~60 mW is around 1 µW. Any tips?
Consider the identification of a nonlinear system whose model structure is known beforehand. The identification problem then becomes one of parameter estimation.
This problem is solved using metaheuristics.
But the issue is whether the MSE landscape for the nonlinear dynamical system is multimodal.
Can we have a mathematical proof of this?
I have attached a paper which says that the MSE will be multimodal, but I don't understand how to prove it. Please help.
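A general proof is model-dependent, but the phenomenon is easy to demonstrate: even a one-parameter nonlinear model can have a multimodal MSE surface. A sketch (the sinusoidal model is purely illustrative and is not the system in the attached paper):

```python
import numpy as np

# Toy demonstration that a one-parameter nonlinear model can have a
# multimodal MSE landscape. Model: y = sin(a*x), with true a = 3.
x = np.linspace(0.0, 10.0, 200)
y = np.sin(3.0 * x)

a_grid = np.linspace(0.1, 6.0, 600)
mse = np.array([np.mean((y - np.sin(a * x))**2) for a in a_grid])

# Count strict local minima of the MSE curve over the parameter grid.
local_min = [i for i in range(1, len(mse) - 1)
             if mse[i] < mse[i - 1] and mse[i] < mse[i + 1]]
```

The cross-correlation term between the data and the model output oscillates in the parameter a, which is what creates the spurious local minima; for specific model classes this is the kind of argument a formal proof would make precise.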
I am using a MultiMode 8 to study the piezo-hysteresis loop of a thin film, but I am not getting a good, flat curve. Most of the time the curve is collapsed, and sometimes it is a bit clearer.
Can anybody give me good suggestions/steps regarding this?
Maybe it is a little hard to get a good piezoresponse curve in the thin-film case, but I think it shouldn't have a collapsed nature.
I don't know exactly whether this is an instrumental or a scanning-parameter issue.
I want to find out how many modes are present in a data distribution. In my search I found many methods for testing whether a distribution is unimodal or multimodal, but I am interested in finding the number of modes present in the distribution. Can anyone suggest how to estimate this?
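One pragmatic estimator (a sketch, not a formal test): count the local maxima of a kernel density estimate. The count depends on the bandwidth, so it is worth sweeping several bandwidths and looking for a stable plateau; this is the idea behind Silverman's critical-bandwidth test:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmax

def count_modes(x, bw_method=None, grid=1000):
    """Estimate the number of modes as the number of local maxima of a
    Gaussian KDE evaluated on a regular grid."""
    kde = gaussian_kde(x, bw_method=bw_method)
    xs = np.linspace(np.min(x), np.max(x), grid)
    return len(argrelmax(kde(xs))[0])

# Illustrative bimodal sample
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(8.0, 1.0, 500)])
n_modes = count_modes(x)
```

Small bandwidths will inflate the count with sampling noise and large ones will merge genuine modes, so the plateau over a range of `bw_method` values is the quantity to report, not a single count.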
Is there anything, for instance, like the findings about preference organisation in conversation analysis, or hypercorrect patterns in sociolinguistics, or semantic prosody in corpus linguistics?
It is well known that co-expressive gesticulation gestures develop their movement structure accompanying the verbal and melodic structure of the utterance, and that they are linked to it semantically and pragmatically.
When analysing gesticulation gestures linked to verbal language and intonation, it seems to me a little difficult to establish a close descriptive relation between the movement structure of gesticulation gestures and the verbal and melodic structure of the utterance in spontaneous speech.
I think that one of these categories or aspects has to do with the close relation between the most prominent segment in the pitch range of the utterance and the most prominent phase in the gesticulation structure.
Could you suggest other categories or aspects I should pay attention to?
We are trying to couple light from a multimode fiber (core diameter of 100-200 µm) into a normal single-mode fiber (9/125 µm). A lens system was used, but the coupling loss is still very significant. How can I achieve efficient light coupling from multimode to single-mode fibers? Are any off-the-shelf products available?
I want to design a multimode bandpass filter as a quarter-wavelength CPW microstrip line for noise-reduction applications. If you know about this, please share it with me or send me any citations. Thanks a lot.
Currently, I am working on a research topic related to multimodal transportation in the continuous hub location routing problem. I was wondering if there are any research papers studying the impact of using different modes of transportation on the hub discount factor (alpha).
To be precise, how can the value of alpha be changed for different modes of transportation in a multimodal hub network?
Suggestions of research papers would be highly appreciated.
How can I numerically find the values of the propagation constants of the different modes inside a step-index multimode fiber using the eigenvalue equation?
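A sketch of the standard numerical route under the weakly guiding (LP-mode) approximation, which I assume here: for each azimuthal order l, solve the characteristic equation u·J_{l-1}(u)/J_l(u) = -w·K_{l-1}(w)/K_l(w) with w = sqrt(V² - u²), then recover the propagation constant from each eigenvalue u via beta = sqrt((k0·n1)² - (u/a)²). Sign-change brackets must be filtered to discard the poles at zeros of J_l:

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

def lp_mode_u(l, V, n_grid=2000):
    """Solve the LP-mode characteristic equation of a weakly guiding
    step-index fiber,
        u*J_{l-1}(u)/J_l(u) = -w*K_{l-1}(w)/K_l(w),  w = sqrt(V**2 - u**2),
    for azimuthal order l and normalized frequency V. Returns the list of
    u eigenvalues (one per radial order m); the propagation constant then
    follows as beta = sqrt((k0*n1)**2 - (u/a)**2)."""
    def f(u):
        w = np.sqrt(V**2 - u**2)
        return u * jv(l - 1, u) / jv(l, u) + w * kv(l - 1, w) / kv(l, w)

    us = np.linspace(1e-6, V - 1e-6, n_grid)
    fs = np.array([f(u) for u in us])
    roots = []
    for i in range(n_grid - 1):
        if np.isfinite(fs[i]) and np.isfinite(fs[i + 1]) and fs[i] * fs[i + 1] < 0:
            r = brentq(f, us[i], us[i + 1])
            if abs(f(r)) < 1e-6:  # discard pole crossings at zeros of J_l
                roots.append(r)
    return roots

# Example: V = 5 supports two l = 0 modes (LP01 and LP02)
u_values = lp_mode_u(0, 5.0)
```

Sweeping l upward until no roots remain enumerates all guided LP modes for a given V.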
In an optical fiber, the coupling coefficient of each mode in a multimode fiber core has a finite value. Sometimes its value comes out as (0.1 + 0.3i), where i is the imaginary unit. What is the physical interpretation of the real and imaginary parts of the coupling coefficient in the multimode fiber core?
Can we explain the light-guiding principle inside a single-mode fiber by the ray-theory approach, as for a multimode fiber, and if so, how?
I have constructed a multimode fiber laser with a normalized frequency V of ~6, and the output is in the multimode regime. We can clearly distinguish the LP01 mode and the LP11 mode, but we are not sure whether the next higher-order mode is the LP21 mode or the LP02 mode. We do not currently have a beam profiler at this wavelength. Will the LP21 mode definitely start oscillating before the LP02 mode as the pump power is increased?
Please tell me how to find the propagation constant of the different LP (linearly polarized) modes inside a multimode step-index fiber. Is there a formula to calculate the value of the propagation constant? Any help would be appreciated.
In the field of multiobjective evolutionary optimization and solving multimodal problems, many algorithms have been introduced, but some, like NSGA-I, NSGA-II, SPEA-I, and SPEA-II, are much more famous than others. In NSGA-II, as I read in some papers, the concept of crowding distance is used instead of the fitness sharing used in its predecessor, NSGA-I.
I want to know the pros and cons of fitness sharing versus crowding distance, and, despite the lower computational complexity of NSGA-II, what is the drawback of using the crowding-distance method in it?
Thanks in advance.
What is the physical limit of the edge steepness of a flat-top beam profile?
We image the output (IR ns pulses) of a multimode fiber onto our target in order to drill holes. But the low steepness of the flat-top edges causes flaking at the edges of the hole. The material is a multilayer structure.
I would like to know the physical relation between the steepness of the beam-profile edges and the fiber core diameter, NA, and wavelength.