Science topic
Probabilistic Graphical Models - Science topic
Explore the latest questions and answers in Probabilistic Graphical Models, and find Probabilistic Graphical Models experts.
Questions related to Probabilistic Graphical Models
This question is related to making inferences from a complex graphical model. Graph databases are used to store and query the relationships between nodes in a graph. We need an open-source graph database API on top of which we can build our own probabilistic graphical models and code our own inference engine.
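To illustrate the kind of thing we want to build, here is a minimal sketch; networkx is used only as an in-memory stand-in for a graph API, and any graph database that exposes node/edge attributes could play the same role (node names and CPT values are made up):
```
import networkx as nx

g = nx.DiGraph()
g.add_node("Rain", states=["T", "F"], cpt={("T",): 0.2, ("F",): 0.8})
g.add_node("WetGrass", states=["T", "F"],
           # keys are (Rain state, WetGrass state): P(WetGrass = state | Rain = state)
           cpt={("T", "T"): 0.9, ("T", "F"): 0.1,
                ("F", "T"): 0.2, ("F", "F"): 0.8})
g.add_edge("Rain", "WetGrass")

# An inference engine would walk the graph and look up parents and CPTs like this:
for node in nx.topological_sort(g):
    print(node, list(g.predecessors(node)), g.nodes[node]["cpt"])
```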
I need to establish whether there is a link between two columns from two different datasets that share one matching column, where:
Dataset1: bipartite (M, DS)
```
M     DS
m23   ds3
m23   ds67
m54   ds325
...   ...
```
Dataset2: tripartite (M, G, DG)
```
M     G     DG
m23   g6    dg32
m23   g8    dg1
m54   g32   dg65
...   ...   ...
```
These two datasets have one column in common (i.e., **M**), and the relationship among the elements is shown below:
```
M ----affects----> G
M ----causes-----> DS
DG ----affects----> M
```
Primary goal: to calculate the probability of a possible link/edge that might exist between indirectly related columns (e.g., **DG** and **DS**) via the common column (**M**).
So, for a given list of DS entries, how do I find the probability of the existence of a link/edge between
a selected DS and all the other DGs?
```
DS <---- ----> DG
```
If DS entries (ds3, ds67) were selected, the output should look like this:
element1 - element2 - probability/statistical value signifying the existence of a direct relationship or link.
```
ds3 - dg32 - 100% (common M value)
ds3 - dg1 - 100% (common M value)
ds3 - dg65 - 43.66%
---
ds67 - dg32 - 100% (common M value)
ds67 - dg1 - 100% (common M value)
ds67 - dg65 - 55.12%
```
I am trying to code this in Java, but Python-based solutions can work too.
I am sorry, I am not too familiar with graph theory, so a somewhat descriptive solution would be really appreciated.
Thanks.
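To make the setup concrete, here is a minimal Python sketch using the columns above. The 1.0 score reproduces the "common M value" case; the fallback score is only a placeholder (a Jaccard-style overlap), because what to compute for indirectly related pairs is exactly my question:
```
import pandas as pd

ds1 = pd.DataFrame({"M": ["m23", "m23", "m54"], "DS": ["ds3", "ds67", "ds325"]})
ds2 = pd.DataFrame({"M": ["m23", "m23", "m54"], "G": ["g6", "g8", "g32"],
                    "DG": ["dg32", "dg1", "dg65"]})

# M values associated with each DS and each DG
m_of_ds = ds1.groupby("DS")["M"].apply(set)
m_of_dg = ds2.groupby("DG")["M"].apply(set)

def link_score(ds, dg):
    """1.0 when DS and DG share an M value (direct path DS <- M -> DG).
    For pairs with no common M, a Jaccard overlap of the M sets is shown
    only as a placeholder for the probabilistic measure I am asking about."""
    a, b = m_of_ds[ds], m_of_dg[dg]
    if a & b:
        return 1.0
    return len(a & b) / len(a | b)   # 0.0 here without a common M

for ds in ["ds3", "ds67"]:
    for dg in m_of_dg.index:
        print(ds, dg, f"{link_score(ds, dg):.2%}")
```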
Hello,
a seemingly simple design question: the aim is to visualize the dependence of A and B by connecting A and B with a straight line (possibly with a label). The design options are: line type, line width, and a text or symbolic label.
How would you visualize the "significance" and/or "strength" of the dependence?
Details:
- A and B are either independent (no line) or dependent. They are considered dependent if the likelihood of being independent (the p-value / "significance") is small (which corresponds in each setting to a certain value of a test statistic).
- The "strength" of dependence of A and B might be given on a scale, e.g. [-1,1] if one considers classical correlation.
(The use of colour is a further design option, which breaks down in black and white print. Therefore it was excluded.)
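To make the design space concrete, here is a minimal matplotlib sketch of the encodings under discussion: line width for strength, line type for significance, a text label for the test statistic. The particular mappings and values are illustrative only:
```
import matplotlib.pyplot as plt

# Hypothetical edges: (node, node, strength in [-1, 1], p-value, label)
edges = [("A", "B", 0.8, 0.0005, "T=4.2"),
         ("A", "C", 0.3, 0.04,   "T=1.1"),
         ("B", "C", -0.6, 0.20,  "T=0.7")]
pos = {"A": (0, 0), "B": (1, 0), "C": (0.5, 0.9)}

fig, ax = plt.subplots()
for u, v, strength, p, label in edges:
    (x1, y1), (x2, y2) = pos[u], pos[v]
    ax.plot([x1, x2], [y1, y2],
            linewidth=1 + 4 * abs(strength),       # width encodes |strength|
            linestyle="-" if p < 0.01 else "--",   # line type encodes significance
            color="black")
    ax.text((x1 + x2) / 2, (y1 + y2) / 2, label, fontsize=8)
for name, (x, y) in pos.items():
    ax.text(x, y, name, fontweight="bold")
ax.set_axis_off()
plt.show()
```
This stays legible in black-and-white print, which is why colour was excluded above.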
### everything below can be skipped; it only provides further details for readers interested in the background of the question ###
The detection of dependence and its quantification are usually separate procedures, so a mixture of both might be confusing...
Background:
Apart from many other new contributions, the paper arXiv:1712.06532
introduces a visualization scheme for higher-order dependencies (including consistent estimators for the dependence structure).
Based on feedback, there seems to be a tendency to interpret the method/visualization via a wrong intuition (rather than via its description given in the paper), so I wonder whether this can be mitigated by an improved visualization.
If you want to test your intuition, use in R:
```
install.packages("multivariance")
library(multivariance)
# dependence structure detection on sample datasets shipped with the package
dependence.structure(dep_struct_several_26_100, alpha = 0.001)
dependence.structure(dep_struct_star_9_100, alpha = 0.01)
dependence.structure(dep_struct_ring_15_100, alpha = 0.01)
```
The current visualization does NOT include the "strength" of dependence, but that is what some readers seem to believe they see.
The paper is concerned with dependencies of higher order, so it goes beyond the simple initial example of this question. Still, it depicts dependencies by lines and usually uses the value of the test statistic as the label. Redundancy is introduced by using colour, line type and, in certain cases, also the label to denote the order of dependence.
It seems that using the value of the test statistic as the label causes irritation. The fastest detection method is based on conservative tests; in this setting there is a one-to-one correspondence (independent of sample size and marginal distributions) between the value of the test statistic and the p-value, so it provides a very reasonable label (for the educated user). In general, the value of the test statistic gives only a rough indication of the significance.
A further comment on the distinction between "significance" and "strength": the paper also introduces several variants of correlation-like measures, which are just scaled versions of the test statistics. Thus (for fixed sample size and fixed marginals) there is also a one-to-one correspondence between the "strength" and the conservative "significance". These measures also satisfy certain dependence-measure axioms. But one should keep in mind that these axioms are not sufficient to provide a sensible interpretation of different (or identical) values of the "strength" in general (e.g., when varying the marginal distributions). That is why all methods are currently based on "significance".
Is the canonical unit 2-probability simplex, i.e. the convex hull of the three standard unit vectors (1,0,0), (0,1,0) and (0,0,1) in three-dimensional Euclidean space (an equilateral triangle), closed under all and only convex combinations of probability vectors,
that is, is it exactly the set of all triples of non-negative real numbers that sum to 1?
Do any unit probability vectors go missing? For example, could <p1=0.3, p2=0.2, p3=0.5> fail to be an element of the domain if the simplex in barycentric/probability coordinates is not constructed appropriately as a function of p1, p2, p3 (say with y denoting p2 and z denoting p3)?
Each vector <p1, p2, p3> has entries with p1 + p2 + p3 = 1 and pi >= 0; for instance, the slice x = p1 = 1/3 in the x, y, z space denotes the set of probability vectors whose first entry is 1/3, i.e. <1/3, p2, p3> with p2 + p3 = 2/3 and p1, p2, p3 >= 0.
Does using absolute barycentric coordinates rule out this possibility of a vector going missing, where <p1=0.3, p2=0.2, p3=0.5> is simply the point located at (p1, p2, p3) in absolute barycentric coordinates?
Given that the simplex is a convex hull, it is the smallest such set containing the vertices; I presume this means that all and only the triples of non-negative entries summing to 1 are included (any proper subset would fail to be closed under convex combinations of the vertices), so that there are no vectors with negative entries.
Does anything go missing when the simplex is traditionally described in three coordinates, as the convex hull of the three standard unit vectors (1,0,0), (0,1,0) and (0,0,1) in three-dimensional Euclidean space, or can this only be guaranteed by representing it in that fashion?
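A small numeric check of what I mean (numpy; the particular vector is the example above):
```
import numpy as np

vertices = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]], dtype=float)

# Any probability vector is itself the weight vector of a convex combination of the vertices:
w = np.array([0.3, 0.2, 0.5])
point = w @ vertices
print(point)                                            # [0.3 0.2 0.5] -- nothing goes missing
print(point.min() >= 0, np.isclose(point.sum(), 1.0))   # non-negative and sums to 1

# Conversely, convex combinations of points in the simplex stay in the simplex:
p, q = np.array([0.3, 0.2, 0.5]), np.array([0.6, 0.1, 0.3])
mix = 0.25 * p + 0.75 * q
print(mix.min() >= 0, np.isclose(mix.sum(), 1.0))
```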
Suppose I have a Bayesian network and I want to know the strength of the predictions along each arc. For example, if node A predicts node B (meaning there is a directed edge from A to B) and I want to know how strong this prediction is, what do I do? Is it the frequency of that edge among all the network models considered by the algorithm, or is it something else? How do I calculate this value? I used the bnlearn package in R to learn the network.
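To illustrate what I mean by "frequency of that edge among all the network models", here is a schematic bootstrap sketch in Python; pgmpy is used only as a stand-in structure learner (its API may differ between versions), and this is just to illustrate the idea, not bnlearn's implementation:
```
import pandas as pd
from collections import Counter
from pgmpy.estimators import HillClimbSearch, BicScore

def edge_frequencies(data: pd.DataFrame, n_boot: int = 100) -> Counter:
    """Count, over bootstrap resamples, how often each directed edge is learned."""
    counts = Counter()
    for _ in range(n_boot):
        boot = data.sample(frac=1.0, replace=True)
        dag = HillClimbSearch(boot).estimate(scoring_method=BicScore(boot))
        counts.update(dag.edges())
    return counts

# frequencies = edge_frequencies(df, n_boot=200)
# "strength" of A -> B would then be frequencies[("A", "B")] / 200
```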
In the case of PGMs, how large a sample size is enough?
I want to create an algorithm to auto-populate the CPTs, using some trial runs of a Monte Carlo simulation to determine the values in all the cross-state cells of the conditional probability tables used in Bayesian inference.
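A minimal sketch of the counting step I have in mind (pandas; the node names and simulation are stand-ins): run the Monte Carlo trials, collect one row per trial, and normalize the counts of the child's states within each parent configuration.
```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for Monte Carlo trial runs: one row per trial, one column per node.
trials = pd.DataFrame({
    "Rain":      rng.choice(["T", "F"], size=10_000, p=[0.2, 0.8]),
    "Sprinkler": rng.choice(["T", "F"], size=10_000, p=[0.4, 0.6]),
})
trials["WetGrass"] = np.where(
    (trials["Rain"] == "T") | (trials["Sprinkler"] == "T"),
    rng.choice(["T", "F"], size=10_000, p=[0.9, 0.1]),
    rng.choice(["T", "F"], size=10_000, p=[0.05, 0.95]),
)

# CPT P(WetGrass | Rain, Sprinkler): normalize counts within each parent configuration.
cpt = pd.crosstab([trials["Rain"], trials["Sprinkler"]], trials["WetGrass"],
                  normalize="index")
print(cpt)
```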
What are the pros and cons of probabilistic reasoning versus fuzzy logic?
Do there exist at least 9 distinct bijective, diffeomorphic, homeomorphic, analytic functions F(x, y) of two variables in the x, y Cartesian plane,
F: [0,1] x [0, sqrt(3)/2] -> Δ², the unit 2-probability simplex,
with CODOM(F) = IM(F) = Δ² = { <p1, p2, p3> : p1 + p2 + p3 = 1, 1 >= p1, p2, p3 >= 0 },
and with
F(0, 0) = (1, 0, 0)
F(1, 0) = (0, 1, 0)
F(1/2, sqrt(3)/2) = (0, 0, 1)
F(1/2, sqrt(3)/6) = (1/3, 1/3, 1/3),
the inverse function being
(x, y) = F⁻¹(<p1, p2, p3>) = ( x = (2 p2 + p3)/2, y = p3 · sqrt(3)/2 )?
Incidentally, each point must also accommodate the 8-element Boolean algebra of events built on <p1, p2, p3> (p1 + p2 + p3 = 1, pi >= 0), with Omega = 1 and the empty set = 0:
Omega = {A, B, C}, algebra = { Omega, emptyset, {A}, {B}, {C}, {A or B}, {A or C}, {B or C} }
PR(A) = p1, PR(B) = p2, PR(C) = p3
PR(A or B) = PR(A) + PR(B) = p1 + p2 >= 0
PR(A or C) = PR(A) + PR(C) = p1 + p3
PR(B or C) = PR(B) + PR(C) = p2 + p3
so that F(x, y) = <p1, p2, p3> induces the 8-tuple <1, 0, p1, p2, p3, p1+p2, p2+p3, p3+p1>.
Since every element of the simplex must then be present at least six times, the image must in effect consist of at least six simplexes, F(x, y, i) for i in {1,...,6}; these can be the same simplex but with the order of the entries p1, p2, p3 in each <p1, p2, p3> interchanged.
In addition, there is a further triangle "frame" function on the same coordinates that is ranked by these chances: G(x, y, {1,...,6}) = <g1, g2, g3> with g1 + g2 + g3 = 1 and gi >= 0, inducing the 8-tuple <1, 0, g1, g2, g3, g1+g2, g2+g3, g1+g3>, indexed by j in {1,...,8}, just as F(x, y, i) induces <1, 0, p1, p2, p3, p1+p2, p2+p3, p1+p3> for the events (Omega, emptyset, A, B, C, A or B, B or C, A or C).
G takes the same values as F at the distinguished points:
G(0, 0) = (1, 0, 0), G(1, 0) = (0, 1, 0), G(1/2, sqrt(3)/2) = (0, 0, 1), and G(1/2, sqrt(3)/6, {1,...,6}) = (1/3, 1/3, 1/3), which induces <1, 0, 1/3, 1/3, 1/3, 2/3, 2/3, 2/3>.
For every (x, y) and every i in {1,...,6}: G(x, y, i, 1) = 1 and G(x, y, i, 2) = 0; in the interior 0 < g1, g2, g3 < 1; each pairwise sum strictly exceeds its summands (g1+g2 > g1, g2; g2+g3 > g2, g3; g1+g3 > g1, g3); and g1 + g2 + g3 = 1.
The ranking conditions, for all (x, y), all i in {1,...,6} and all events j, t in {1,...,8}, are:
- gj > 1/3 iff pj > 1/3; gj = 1/2 iff pj = 1/2; gj = 2/3 iff pj = 2/3; 0 < gj < 1/3 iff 0 < pj < 1/3; 1/3 < gj < 2/3 iff 1/3 < pj < 2/3; 2/3 < gj < 1 iff 2/3 < pj < 1; gj = 0 iff pj = 0; gj = 1 iff pj = 1.
- Order is preserved across points, copies and events: F(x, y, i, j) <, =, > F(x1, y1, i1, t) iff G(x, y, i, j) <, =, > G(x1, y1, i1, t), for any other point (x1, y1), copy i1 and event t.
- Sums of values are compared consistently as well: F(x, y, i, t1) + F(x1, y1, i1, t2) <, =, > F(x2, y2, i2, t3) + F(x3, y3, i3, t4) iff the corresponding relation holds between the G-sums.
- In particular G(Ei) = G(Ej) iff PR(Ei) = PR(Ej) (e.g. g1 = g2 iff p1 = p2), g1 + g2 = g3 iff p1 + p2 = p3, and G(Ei) < G(Ej) iff PR(Ei) < PR(Ej).
On each vector G is subject to the same constraints as F: G(A) + G(B) + G(C) = 1, G(A or B) = G(A) + G(B), G(Omega) = 1, G(emptyset) = 0, etc., whenever G⁻¹(x, y, i) = F⁻¹(x, y, i); and all of this must hold for the 8 elements of the sigma-algebra of each of the uncountably many vectors in the interior of each of the six simplexes, and for every element of the algebra of each such vector apart from Omega and the empty set.
In addition, every element of Δ¹, the unit 1-probability simplex (the set of all pairs of non-negative numbers summing to one), must be present within the image of the function, described by triples such as (0, p, 1−p) along the edges of the triangle in Cartesian coordinates, which maps to the unit 2-probability simplex consisting of every triple of non-negative reals summing to 1.
So: is the equilateral-triangle (ternary-plot) representation using Cartesian coordinates over a Euclidean triangle bijective onto the convex hull, or do probability triples go missing? I have been told that in the isosceles representations (i.e. the Marschak-Machina triangle) certain triples, i.e. convex combinations of three non-negative values summing to one, are not present.
Simply said: does there exist a bijective, homeomorphic (and analytic) function F(x, y) of two variables from the x-y plane to the probability 2-simplex Δ², where Δ² is the set of each and every triple of three non-negative numbers summing to one, <p1, p2, p3> with 1 >= p1, p2, p3 >= 0 and p1 + p2 + p3 = 1?
That is, F(x, y) = <p1, p2, p3>, where F maps each (x, y) in dom(F) ⊂ R² to one and only one element of Δ² ⊂ (R≥0)³, and where the inverse function F⁻¹ maps each and every element of Δ², the ENTIRE probability simplex, uniquely back to an element of dom(F), the prescribed region of the Cartesian plane, with a continuous inverse.
Apparently one generally has to use a Euclidean triangle with side length one in Cartesian coordinates (though an altitude of one is sometimes used as well, according to the attached book, p. 169), which suggests that certain elements of the simplex might go missing, i.e. that there might be no point in dom(F) with F(x, y) = <p1, p2, p3> for some <p1, p2, p3> in the simplex. The requirement is that F is invertible with a unique, continuous, analytic inverse, so that there exists no <p1, p2, p3> in the simplex for which no (xi, yi) in dom(F) satisfies F(xi, yi) = <p1, p2, p3>; equivalently, F must map onto every vector in the simplex, with no triple of non-negative numbers summing to one left out. This should hold for each of the nine F.
Here p(i) denotes the triple whose Cartesian index is i = (x, y), i.e. F(x, y) = p(i) = <p1_i, p2_i, p3_i>, with
CODOM(F) = IM(F) = Δ² = { p(i) = <p1, p2, p3>_i : p1_i + p2_i + p3_i = 1, 1 >= p1_i, p2_i, p3_i >= 0 },
F(0, 0) = (1, 0, 0), F(1, 0) = (0, 1, 0), F(1/2, sqrt(3)/2) = (0, 0, 1),
mapping (with a continuous inverse) the region of the Cartesian plane inscribed within the equilateral triangle onto Δ².
Further requirements, where no triple goes missing:
1. Δ¹, the unit 1-probability simplex subspace (the set of all pairs of non-negative reals summing to one, described as triples with a single zero entry), satisfies Δ¹* ⊂ IM(F) = codom(F) = Δ², and each probability value in [0,1], i.e. each and every real number in [0,1], occurs infinitely many times as each of p1, p2, p3 on some vector.
2. The image contains, as a proper subset, the set Δ¹* of degenerate triples (the vectors of the unit 2-simplex with one and only one zero entry), e.g. <0.6, 0.4, 0> and <0, 0.6, 0.4>.
3. For each degenerate triple in Δ¹* ⊂ Δ², the map Δ¹* -> Δ¹ is the identity, so no double goes missing: along the edges of the equilateral triangle, every pair of numbers summing to 1 must appear.
4. Among these degenerate vectors (excluding the vertices), every real p in (0,1) with p + (1−p) = 1 must occur at least six times, i.e. in six distinct degenerate vectors <p1, p2, p3>_i, i in {1,...,6}, mapped to six distinct points in the plane.
5. The same must hold for the sums of entries in each triple: among the triples <p1, p2, p3> in IM(F), for every real r in (0,1), each of p1+p2, p2+p3 and p1+p3 must assume the value r infinitely many times (on distinct vectors). There must be no real value in (0,1) that is assumed by one of p1, p2, p3 somewhere, but not assumed, infinitely many times, by each of the three sums on some vector as well; the entire unit interval of values must be covered, and each value assumed infinitely many times, for each of the three distinct sums.
6. Finally, this must extend to sums of entries taken from distinct vectors. For any two elements drawn from (p1, p2, p3) of two distinct vectors, every combination summing to one must be realized: p1 from vector 1 and p2 from vector 2 with p1 + p2 = 1; p1 and p3 with p1 + p3 = 1; p1 and p1 with p1 + p1 = 1; p2 and p2 with p2 + p2 = 1; p2 and p3 with p2 + p3 = 1; p3 and p3 with p3 + p3 = 1.
More generally, for every 3 elements of (p1, p2, p3, p1+p2, p2+p3, p3+p1) drawn from 2 or 3 or more distinct vectors v1, v2, v3, and indeed for any 4/5/6/7/8/9/10 distinct elements spread across up to 10 distinct vectors (with any pattern of elements sharing a common vector: all on distinct vectors, two on a common vector and the rest on distinct ones, and so on), and for every target value in the list below, the combination must be realized, with uncountably many versions of each element individually for each sum value and for each number of terms, i.e. there cannot be any gaps:
for all n in {1,...,48}: n/28, together with 1/2, 2/3, 1, 1.125, 4/3, 1.25, 1.5, 5/3, 1.75, 1.877, 11/6, 2, 2.25, 2.33, 2.5, 2.66, 2.75, 3, 3.25, 3.33, 3.5, 3.666, 3.75, 4, 4.5, 4.666, 5, 5.5, 6, 6.5, 7, 7.333, 7.5, 7.666, 8 (roughly 70 distinct values in all).
Moreover, the entire simplex must be present (1) for each sum length (any 2 that sum to one, any 3 that sum to one, any 4 that sum to one, any 5 that sum to one, any 3 that sum to 2, any 4 that sum to 2, any 5 that sum to 2, any 4 that sum to 3, any 5 that sum to 3), (2) for all of the roughly 70 values above, (3) for every number of distinct vectors across which the summands are spread (up to ten), and (4) for all 6 term types (p1, p2, p3, p1+p2, p2+p3, p3+p1); and every value in [0,1] must be assumed individually by every term in every such sum.
Explicitly, with the indicated entries lying on distinct vectors v1, v2, v3, the realizable combinations include, for example:
p1(v1) + p2(v2) + p3(v3) = 1; p1(v1) + p3(v2) + p2(v3) = 1; p1(v1) + p2(v2) + p2(v3) = 1; p1(v1) + p1(v2) + p1(v3) = 1; p2(v1) + p2(v2) + p2(v3) = 1; p3(v1) + p3(v2) + p3(v3) = 1; p2(v1) + p1(v2) + p3(v3) = 1; p2(v1) + p3(v2) + p2(v3) = 1; and likewise with two of the entries on a common vector, e.g. p1(v1) + p2(v1) + p2(v2) = 1, p1(v1) + p2(v1) + p3(v3) = 1, p1(v1) + p3(v1) + p2(v3) = 1, and so on for every assignment of the entries to one, two or three distinct vectors.
Likewise for pairs: for any given p1 in v1 and p2 in v2, p1 + p2 = 1; similarly p1 + p3 = 1, p1 + p1 = 1, p2 + p2 = 1, p2 + p3 = 1, p3 + p3 = 1; and for pairs involving the disjunctive sums: p1 + (p1+p2) = 1, p1 + (p1+p3) = 1, p1 + (p2+p3) = 1, p2 + (p1+p2) = 1, p2 + (p1+p3) = 1, p2 + (p2+p3) = 1, p3 + (p2+p3) = 1, (p1+p2) + (p1+p2) = 1, (p1+p2) + (p1+p3) = 1, (p1+p2) + (p2+p3) = 1, (p2+p3) + (p3+p1) = 1, (p2+p1) + p3 = 1, and the remaining analogous cases, always with the two entries on distinct vectors.
For all of the 36 or so distinct combinations in which p1 on one vector and one of (p1, p2, p3, p1+p2, p2+p3, p3+p1) on a distinct vector sum to one, and for all reals in [0,1], each combination must obtain infinitely many times; each must be surjective (indeed bijective) with regard to the unit 1-probability simplex; and within each listed combination each of the two terms must individually assume each of the uncountably many real values in [0,1] uncountably many times. The same applies when the entries on distinct vectors instead satisfy p3 + p1 = 1, p2 + p1 = 1, p3 + p2 = 1 or p2 + p3 = 1.
Moreover, both "horizontally" and "vertically", the image must contain the entire simplex of non-negative triples summing to 2, in the following sense. Consider the map G which sends each element of IM(F) = Δ² to its triple of disjunction probabilities:
if F(x, y) = <p1, p2, p3>, then G(x, y) = G(F⁻¹(<p1, p2, p3>)) = <p1+p2, p2+p3, p1+p3>.
G must also be a bijective, analytic diffeomorphism onto { <q1, q2, q3> : q1 + q2 + q3 = 2, 1 >= q1, q2, q3 >= 0 }, the set of all triples of reals in [0,1] summing to 2. Here dom(G) = dom(F), so for any <p1, p2, p3> in the domain we compute F⁻¹(<p1, p2, p3>) to get the Cartesian coordinates of that vector and feed them into G, which returns the probabilities of the disjunctive events p1+p2, p2+p3, p1+p3; it is in this sense that the sum-to-2 simplex must be covered.
Spelling out requirement (4) above: there must be six pairwise distinct points i = (x, y), i2 = (x2, y2), i3 = (x3, y3), i4 = (x4, y4), i5 = (x5, y5), i6 = (x6, y6) at which the degenerate triples appear:
F(x, y)   = <0, 1−p, p>   (p1 = 0, p2 + p3 = 1, p3 = p)
F(x2, y2) = <p, 0, 1−p>   (p2 = 0, p1 + p3 = 1, p1 = p)
F(x3, y3) = <p, 1−p, 0>   (p3 = 0, p1 + p2 = 1, p1 = p)
F(x4, y4) = <0, p, 1−p>   (p1 = 0, p2 + p3 = 1, p2 = p)
F(x5, y5) = <1−p, p, 0>   (p3 = 0, p1 + p2 = 1, p2 = p)
F(x6, y6) = <1−p, 0, p>   (p2 = 0, p1 + p3 = 1, p3 = p)
with each triple summing to 1 and exactly one of p1, p2, p3 equal to 0. Each precise value must occur at least twice in the first entry of two distinct vectors (e.g. <0.6, 0.4, 0> and <0.6, 0, 0.4>), at least twice in the second entry (e.g. <0, 0.6, 0.4> and <0.4, 0.6, 0>), and at least twice in the third entry (e.g. <0.4, 0, 0.6> and <0, 0.4, 0.6>).
In other words, for all p1 in (0,1) and for all p2, p3 in (0,1) with p2 + p3 = 1, there exist two distinct vectors (if only in name) <p1 = 0, p2, p3 = 1−p2> and <p1 = 0, p2 = 1−p3, p3>; and for every p in [0,1] with 0 < p < 1, every degenerate double <p, 1−p> appears among the possible degenerate triple combinations (e.g. <0.6, 0.4, 0>, <0.4, 0.6, 0>, <0, 0.6, 0.4>).
Here Δ¹* = { <p1, p2, p3> in Δ² : exactly one of p1, p2, p3 equals 0 and the other two entries are strictly positive } ⊂ Δ² = IM(F) = codom(F), covering all convex combinations in Δ¹, i.e. all possible probability doubles in [0,1]² of non-negative reals summing to 1, and the map G(Δ¹*) = Δ¹ is the identity.
Equivalently: is there such a map from [0,1] x [0, sqrt(3)/2], or some subset I² ⊂ R², to the unit probability simplex, the triangle of all triples of non-negative numbers <= 1 which sum to one? Are such functions convex, i.e. those which use absolute barycentric coordinates over the probability simplex when defined over an equilateral triangle with unit side length in the Cartesian plane (see https://en.wikipedia.org/wiki/Affine_space#Affine_coordinates)? I presume that such functions are hardly homogeneous, in that infinitely many positive triples would otherwise not be present?
F: I² -> { <x1, x2, x3>_m : x1 + x2 + x3 = 1, x1, x2, x3 in [0,1] }, and conversely from the set of all such triples to a unique index m in I², a real region of the Cartesian plane, with
dom(F) = [0,1] x [0, sqrt(3)/2]
IM(F) = { p(i) = <p1, p2, p3>_i : p1_i + p2_i + p3_i = 1, 1 >= p1_i, p2_i, p3_i >= 0 }
where
F(0, 0) = (1, 0, 0)
F(1, 0) = (0, 1, 0)
F(1/2, sqrt(3)/2) = (0, 0, 1)
F(1/2, sqrt(3)/6) = (1/3, 1/3, 1/3),
i.e. x = (2 p2 + p3)/2 = (2·(1/3) + 1/3)/2 = 1/2
and y = sqrt(3)/2 − (sqrt(3)/2) p1 − (sqrt(3)/2) p2 = (sqrt(3)/2) p3 = (sqrt(3)/2)·(1/3) = sqrt(3)/6,
so that i = (x, y) = F⁻¹(<p1, p2, p3>) = ( x = (2 p2 + p3)/2, y = p3 · sqrt(3)/2 ).
(I will have to check the properties; there are a lot of other roles the map has to fulfil beyond just this.)
In addition, no value of x1+x2, x2+x3 or x3+x1 can be missing: these must assume each value in [0,1], preferably infinitely many times when strictly positive and < 1, and they cannot assume a value that is not also assumed by one of x1, x2, x3 somewhere in the structure.
Preferably this property must contain the unit 1-simplex via x1 and x2+x3: every convex combination of two values summing to one must be assumed, as x1 and 1−x1 on distinct vectors <x1, x2, x3>_m, as x2 and 1−x2 = x1+x3 on a vector <x1, x2, x3>_{m1} with m1 != m, and as x3 and 1−x3 = x1+x2 on a vector <x1, x2, x3>_{m2} with m2 != m1 != m.
There also cannot be any mismatch between elements of the domain on distinct vectors, i.e. diagonal or vertical sums: any two of them summing to one, or any three summing to 1 or to 2, must be realized. For instance, alongside vectors such as
<0.6, 0.25, 0.15>, <0.4, 0.32, 0.28>, <0.26, 0.4, 0.34>, <0.3, 0.4, 0.3>, <0.26, 0.38, 0.36>, <0.35, 0.35, 0.3>, <0.26, 0.4, 0.34>
(I presume that if the image is convex it would contain the doubly stochastic matrices, or the permutation matrices), for vectors such as
<0.7, 0.25, 0.15>, <0.8, 0.32, 0.28>, <0.5, 0.32, 0.28>
there must be a vector <0.3 = 1−0.7, 0.5 = 1−0.5, 0.2 = 1−0.8>, and conversely for any <x1, x2, x3> there must be a triad of three distinct vectors such that one element equals 1−x1, another equals 1−x2, and another equals 1−x3:
<x1, x2, x3>, then <y1, y2, y3> where one of y1, y2, y3 = 1−x1, and <z1, z2, z3> where one of z1, z2, z3 = 1−x2.
There must likewise be distinct vectors <x1, x2, x3> such that, e.g., x1 = 0.25 + 0.32, and the same for x2, x3, x1+x2, x3+x2, x1+x3; vectors whose entries sum to 0.15 + 0.28; and common vectors on which all six events, or rather twelve events, whose collective sum <= 1, lie either as atomic events <x1, x2, x3> or as disjunctive events <x1+x2, x2+x3, x3+x1>. For example, alongside a vector <0.4, 0.25, 0.35> there must be one <x1 = 0.4, x2, x3> with x1 + x2 = 0.6 (such as <0.4, 0.15, 0.45> or <0.6, 0.28, 0.12>) and one <x1, x2, x3> with x1 + x2 = 0.4; and so on for any "two sums" to one, any three elements summing to one, any three elements of distinct vectors summing to 2, and the set of three non-negative numbers summing to 2.
Here m denotes a Cartesian pair of points in the x, y plane which uniquely indexes a specific vector, built over an equilateral triangle in Cartesian coordinates with unit side length:
- side length: the Euclidean distance in Cartesian coordinates between the vertices F⁻¹(1,0,0), F⁻¹(0,1,0), F⁻¹(0,0,1) is 1, whereas in probability coordinates the corresponding 2-norm distance is sqrt((1−0)² + (0−1)² + (0−0)²) = sqrt(2) (and 2 in the 1-norm);
- circumradius: the distance from the triangle centre/circumcentre, i.e. the Cartesian point (1/2, sqrt(3)/6) of the mid probability vector (1/3, 1/3, 1/3), to each vertex is 1/sqrt(3) in Cartesian coordinates; in probability coordinates the centroid-to-vertex distance is sqrt(6)/3 in the 2-norm, the difference in the relevant coordinate is 2/3, and the overall 1-norm difference between the centroid and each vertex is 4/3;
- all three altitudes = medians (the distance from each vertex, in Cartesian coordinates, to the midpoint of the opposing side), in the 2-norm, = sqrt(3)/2;
- the apothem (the 2-norm distance from the circumcentre (1/2, sqrt(3)/6) to the midpoint of each side of the equilateral triangle) = 1/(2 sqrt(3));
- area = sqrt(3)/4.
The centroid (1/3, 1/3, 1/3) is the vector of the n-simplex whose entries are just the n-point average of the unit, the only vector with all three entries precisely the same (and all three pairwise sums equal to 2/3); its Cartesian coordinates are the circumcentre of the triangle, the point where all three medians cross, i.e. the point equidistant from each vertex: F⁻¹(1/n, 1/n, 1/n), here F⁻¹(1/3, 1/3, 1/3) = (1/2, sqrt(3)/6).
So, in probability/barycentric coordinates, the side lengths between the vertices are sqrt(2) in the Euclidean norm, i.e. sqrt((1−0)² + (0−1)² + (0−0)²) = sqrt(2), while the side lengths are 1 in Cartesian coordinates, the distance from the centre is 1/sqrt(3), all medians/altitudes/angle bisectors/perpendicular bisectors are sqrt(3)/2, the area is sqrt(3)/4 and the apothem is 1/(2 sqrt(3)); and F(x, y) = <x1, x2, x3>_{m = (x, y)}.
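A small numeric sketch of the specific map and inverse written above (numpy; just a check of the stated vertex and centroid conditions):
```
import numpy as np

SQ3 = np.sqrt(3.0)

def F_inverse(p):
    """Probability triple <p1, p2, p3> -> Cartesian point (x, y) of the ternary plot."""
    p1, p2, p3 = p
    return np.array([(2 * p2 + p3) / 2.0, p3 * SQ3 / 2.0])

def F(x, y):
    """Cartesian point (x, y) in the triangle -> probability triple <p1, p2, p3>."""
    p3 = 2.0 * y / SQ3
    p2 = x - p3 / 2.0
    p1 = 1.0 - p2 - p3
    return np.array([p1, p2, p3])

# Vertex and centroid conditions from the question:
print(F(0, 0), F(1, 0), F(0.5, SQ3 / 2), F(0.5, SQ3 / 6))

# Round trip: no triple goes missing under this pair of maps.
p = np.array([0.3, 0.2, 0.5])
print(np.allclose(F(*F_inverse(p)), p))
```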
Is there a distinction between strong (or complete) qualitative probability orders which are considered to be strong representations, or total probability relations, neither of which involves incomparable events, both of which use the stronger form of Scott's axiom (not cases of weak, partial or intermediate agreement), and both of whose representations are considered "strong":
type (1): A >= B iff P(A) >= P(B)
versus type (2):
A = B iff P(A) = P(B),
A > B iff P(A) > P(B), and
A < B iff P(A) < P(B)?
The last link speaks to my worry about some total orders that only use
totality, A <= B or B <= A, without trichotomy: https://antimeta.wordpress.com/category/probability/page/3/
where they refer to:
"f ≥ g, but we don't know whether f > g or f ≈ g", and more fully:
"However, as it stands, this dominance principle leaves some preference relations among actions underspecified. That is, if f and g are actions such that f strictly dominates g in some states, but they have the same (or equipreferable) outcomes in the others, then we know that f ≥ g, but we don't know whether f > g or f ≈ g. So the axioms for a partial ordering on the outcomes, together with the dominance principle, don't suffice to uniquely specify an induced partial ordering on the actions."
Both types use a total order:
totality: A <= B or B >= A;
the definition of equality via antisymmetry: A = B iff A <= B and B >= A;
A <= B iff [A < B or A = B] iff not A > B;
A >= B iff [A > B or A = B] iff not A < B;
where A > B is equivalent to B < A, and A >= B is equivalent to B <= A, i.e. not (A < B);
where = is an equivalence relation (symmetric, transitive and reflexive);
<= and >= are reflexive, transitive, negatively transitive, complementary and total;
whilst < and > are irreflexive, asymmetric and transitive:
A < B, B < C implies A < C
A < B, B = C implies A < C
A = B, B < C implies A < C
and negatively transitive and complementary (A > B iff ~A < ~B), with <, =, > mutually exclusive.
In the first kind, equality is an equivalence class denoting not identity or incomparability but equality in rank (in probability), and the order can be modelled as a negatively transitive, weakly connected strict weak order for <, =, >, where weak connectedness means: if not (A = B), then A < B or A > B.
The second kind uses trichotomous, strongly connected strict total orders for <, =, >:
(2) trichotomy: A < B or A = B or A > B is made explicit, where the relations are mutually exclusive and exhaustive in (2);
(3) strong connectedness: not (A = B) iff A < B or A > B;
and both satisfy the axioms A >= emptyset, Omega > emptyset, Omega >= A,
Scott's conditions, and the separability and Archimedean axioms, and monotone continuity if required.
In the first kind <= / >= is primitive, which makes me suspicious, whilst in the second <, =, > are primitive.
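To fix notation, here is a compact restatement of the two order types as I understand them (standard definitions, not taken from a specific source):
```
\textbf{Type (1): } \succsim \text{ primitive (a weak order / total pre-order).}\\
\text{Totality: } A \succsim B \ \text{or}\ B \succsim A. \qquad
\text{Transitivity: } A \succsim B,\ B \succsim C \Rightarrow A \succsim C.\\
A \sim B \iff (A \succsim B \text{ and } B \succsim A), \qquad
A \succ B \iff (A \succsim B \text{ and not } B \succsim A).\\[4pt]
\textbf{Type (2): } \succ,\ \sim \text{ primitive (a trichotomous strict total order).}\\
\text{Exactly one of } A \succ B,\ A \sim B,\ B \succ A \text{ holds; }
\succ \text{ is transitive and } \sim \text{ is an equivalence relation.}\\[4pt]
\text{Strong representation: } A \succsim B \iff P(A) \ge P(B),
\ \text{hence } A \succ B \iff P(A) > P(B) \text{ and } A \sim B \iff P(A) = P(B).
```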
Please see the attached document. The question is whether trichotomy fails in the first type, which appears a bit fuzzier, even though totality (A >= B or B <= A) holds in both cases.
What is unclear is whether there is any canonical meaning to "weak orders" (as opposed to total pre-orders or strict weak orders).
In the context of qualitative probability this is sometimes treated as synonymous with a complete or total order, as opposed to a partial order, which allows incomparables. Generally it is a total order which allows comparable equalities between non-identical events, which are put in the same equivalence class (i.e. A is as probable as B when A = B, as opposed to being one and the same event, or "who knows" in the case of incomparability). Fishburn hints at a second distinction where A may not be as likely as B, and yet it must be the case that
not A > B and not A < B and yet not A = B is possible under this second distinction, while
A >= B or A <= B must still hold,
which appears to say that one can quasi-compare the events (one can say that A is less than or equal, or greater than or equal, in probability to B, but not which of the two relations, A < B or A = B, it specifically stands in),
yet one cannot say that A > B or A < B.
Both satisfy the definitions:
A <= B iff A < B or A = B, iff B >= A, iff ~A >= ~B, where this is mutually exclusive with A < B (equivalently ~B > ~A);
A >= B iff A > B or A = B, iff B <= A, where this is mutually exclusive with A > B (equivalently ~A < ~B);
and both (1) and (2) use as a total ordering over >= / <=:
(1) totality: A <= B or B <= A;
(2) equality in rank via the antisymmetry biconditional: A = B iff A <= B and B >= A, where = is an equivalence relation (symmetric, transitive and reflexive);
(3) A <= B iff A < B or A = B, and A >= B iff A > B or A = B;
(4) the criterion that >, <, >=, <= are complementary (A > B iff ~A < ~B), transitive and negatively transitive, where A < B iff B > A, and where <, =, > are mutually exclusive.
The difference between the two seems to be whether A >= B and A <= B together are equivalent to A = B; or whether, in the first kind, the structure counts as strongly represented even if A >= B may in fact stand for A > B, because one could not specify whether A > B or A = B, yet one could compare them in the sense that under <= one can say the probability is either less than or equal, or greater than or equal, but not precisely which of the two it is. It may come down to some weakening of antisymmetry in the first kind, whereas the less ambiguous trichotomous orders use: not (A = B) iff A < B or A > B. Generally trichotomy is not considered when it comes to satisfying Scott's axiom in its strongest sense, for strict agreement, and I am wondering whether the trichotomous forms, which appear to be required for real-valued or strictly increasing probability functions, are slightly stronger when it comes to dense orders but require a stronger form of Scott's axiom, one that involves < and > and not just <=.
In (1) the <= / >= relation is primitive and neither trichotomy nor strong connectedness is explicit, whilst in (2) A != B iff A > B or A < B, the relations >, =, < are primitive, and both
(1) totality: A <= B or B <= A, and
(2) trichotomy: A < B or A = B or A > B (mutually exclusive and exhaustive)
are made explicit and hold, modelled as strict total trichotomous orders, as opposed to a weakly connected strict weak order with an associated total pre-order (or what may be a total order).
I get the impression that the first kind, as described by Fishburn 1970, considers a weaker relation that does not involve incomparables and is considered total (A >= B or B <= A), but under which one cannot always say that A is as likely as B; it is fuzzy in the sense that one can say that B is either less than or equal in probability to A, or conversely, but if B <= A one cannot (or need not) say whether A = B or B < A.
Trichotomy, not (A = B) iff A < B or A > B, and strong connectedness hold in the second kind, whereas A = B iff A <= B and B >= A holds in both cases, <= is transitive, negatively transitive, complementary, total and reflexive, and orders satisfying A >= B or B <= A are considered complete.
I was wondering if someone knows whether Mathematica allows one to plot another probability function over the unit 2-simplex (it can be expressed as a function of two arguments subject to certain constraints),
where I am taking the domain of the function to be the vectors in the standard 2-simplex itself, subject to those constraints.
Does it actually have a closed-form expression as a function of the x, y coordinates, i.e. as a function of two arguments? I presume Mathematica allows you to plot it, and to optimize certain functions over a ternary plot (ternary graph, triangle plot, simplex plot). Is that correct?
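Not Mathematica, but a minimal Python/matplotlib sketch of the coordinate change any tool needs: sample the simplex, map each triple to ternary (x, y) coordinates, and plot the function of two arguments there. The plotted function (Shannon entropy) is only an example:
```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Uniform samples from the unit 2-simplex
P = rng.dirichlet([1.0, 1.0, 1.0], size=20_000)

# Ternary (barycentric -> Cartesian) coordinates of each triple
x = P[:, 1] + 0.5 * P[:, 2]
y = (np.sqrt(3) / 2) * P[:, 2]

# Example function over the simplex: Shannon entropy of <p1, p2, p3>
z = -np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=1)

plt.tricontourf(x, y, z, levels=20)
plt.gca().set_aspect("equal")
plt.colorbar(label="entropy")
plt.show()
```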
I read in the literature that a replicator neural network is a particular feed-forward network which is trained by replicating the input data points as the desired outputs.
This network is claimed to be effective at detecting outliers, as less frequent patterns will result in higher regression error than the most frequent ones.
However, it seems to me that mapping each data point onto itself can be trivially achieved by the identity function, with zero regression error for all data points.
What am I missing, then?
I thank you in advance.
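For context, a minimal sketch of the kind of network I mean (scikit-learn; the data are made up). The narrow bottleneck hidden layer is what keeps the network from simply learning the identity:
```
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:5] += 6.0                      # a few injected outliers

# Replicator / autoencoder network: inputs are also the targets, but the narrow
# hidden layer (3 units for 10 inputs) forces a lossy compression, so the exact
# identity map is not representable.
net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, X)

recon_error = np.mean((X - net.predict(X)) ** 2, axis=1)
print(np.argsort(recon_error)[-5:])   # highest errors -> candidate outliers
```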
I need a step-by-step procedure for implementing a Bayesian network over two large datasets to predict interactions (that is, patterns and associations).
I also need a breakdown of the Bayesian-network machine-learning approach into a form that is suitable for coding.
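A sketch of one possible pipeline in Python, assuming the pgmpy library (its API may differ between versions), and assuming the two datasets have been joined on a shared key into one discrete-valued table; file and column names are placeholders:
```
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

# 1. Merge the two datasets on their shared key and discretize continuous columns.
data = pd.merge(pd.read_csv("dataset1.csv"), pd.read_csv("dataset2.csv"), on="key")

# 2. Structure learning: search for a DAG that scores well on the joined data.
dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))

# 3. Parameter learning: fit CPTs for the learned structure.
model = BayesianNetwork(dag.edges())
model.fit(data, estimator=MaximumLikelihoodEstimator)

# 4. Inference: query how variables from one dataset respond to evidence from the other.
infer = VariableElimination(model)
print(infer.query(["target_column"], evidence={"evidence_column": "some_state"}))
```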
I have some well-grounded knowledge of Bayesian inference, linear mixed models, and probabilistic graphical models. Image processing is a new learning topic for me.
I have gone through many research papers looking for hybrids, but have only found hybrids for classifier data, i.e. Naive Bayes and logistic regression models. Could you help me understand how the NB-LR hybrid is made? (Working in R.)
Thank you for the help.
For image processing, unlike methods which split the image into patches, total variation operates on the whole image. In this case, for an image of size n by n, what is the complexity of total variation minimization? Is total variation too slow compared to patch-based methods? Thanks.
Assume there are two different datasets. We want to find out whether there exists an interaction (association) between the two datasets.
How can we model this problem using a Bayesian network?
Hi, I am using a KNN classifier in my work. It is said that the k value should be odd, like 1, 3, 5, 7, and I need to know why the k value should be an odd number. Please help me in this regard.
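A tiny illustration of the usual reason given, written in Python: with an even k and two classes, the neighbourhood vote can tie, so the classifier has to break the tie arbitrarily; an odd k avoids that for binary problems (the neighbour labels here are made up):
```
from collections import Counter

# Labels of the 4 nearest neighbours of some query point (hypothetical data)
neighbour_labels = ["A", "A", "B", "B"]

votes = Counter(neighbour_labels)
print(votes.most_common())     # [('A', 2), ('B', 2)] -- a tie with k = 4

# With k = 3 (odd), a two-class vote can never tie:
print(Counter(neighbour_labels[:3]).most_common(1))   # [('A', 2)]
```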
Apart from mixture models (like GMMs), are there any other useful "parametric" methods for learning (estimating) the probability distribution (density function) of continuous random variables, assuming the distribution is not limited to a specific family?
Could someone give a list of the most important methods on this topic?
I am especially interested in network-based methods.
My data lie in the unit interval (proportions). I assume they are drawn from a beta distribution with parameters a and b. Are there recommendations for the choice of the priors for a and b?
Thank you!
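To make the question concrete, here is a minimal sketch of the hierarchical setup I have in mind, assuming PyMC; the Gamma hyperpriors are just placeholders, which is exactly what I am asking about, and the data are made up:
```
import numpy as np
import pymc as pm

data = np.array([0.12, 0.30, 0.45, 0.51, 0.67, 0.72, 0.81])   # proportions in (0, 1)

with pm.Model() as model:
    # Placeholder weakly informative priors on the beta parameters; the question
    # is what a well-justified choice here would be.
    a = pm.Gamma("a", alpha=2.0, beta=0.5)
    b = pm.Gamma("b", alpha=2.0, beta=0.5)
    y = pm.Beta("y", alpha=a, beta=b, observed=data)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(idata.posterior["a"].mean().item(), idata.posterior["b"].mean().item())
```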
Is the number of trials not related to the percentages of probabilities?
I was comparing these two Matlab codes and have a question about which one would have the higher success rate (probability).
Why does the second code produce a higher number of probabilities than the first?
I would like to know how the relationships among words in a corpus are represented using a probabilistic graphical model.
Hello everyone,
In real life, a knowledge structure corresponding to a subject will be huge. It might be that I do not have the data (response frequencies) for all the knowledge states, or the data might be too big to handle at the same time.
Say I have 100 items in the bigger/original knowledge structure, but I have data (surmise relations among the items and response frequencies corresponding to the knowledge states containing them) for only 10 items. I want to estimate the parameters (guessing and careless error) for these 10 items. Can I estimate the parameters for these 10 items using the knowledge states involving only these 10 items, and then use these parameters in the bigger knowledge structure?
Consider the attached graphs, in which one is the bigger structure and the other is a small part of it. In this case, there are 8 items in the small structure. If I consider the precedence relation among the items in the small structure and I have the knowledge states corresponding to these 8 items, can I estimate the item parameters of these items? Would it be theoretically justifiable?
I have been using PLSA to reduce the dimension of image BOV models due to its ability to capture the co-occurrence of visual words. However, the linearity of PLSA limits its accuracy when applied to the classification of image collections of high and medium complexity. I have recently discovered that deep learning algorithms provide non-linear approaches for machine learning tasks. Which of the deep learning approaches is the most suitable for dimension reduction?
Hello, I have some questions about Gaussian mixture models (GMMs). More specifically, if I want to find the parameters (i.e., the mean and the covariance matrix for each Gaussian) for a set of points, how can I proceed?
Is it possible to obtain these parameters using K-means, and then computing the mean and the covariance matrix for each cluster?
Can the weight of a given Gaussian be determined by:
number of points in the cluster / total number of points?
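For reference, a minimal scikit-learn sketch of both routes I am asking about: EM (which GaussianMixture initializes with k-means by default), and the by-hand cluster-wise estimates described above; the data are synthetic:
```
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1.5, (300, 2))])

# Route 1: EM (k-means is used for initialization by default).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.weights_, gmm.means_, gmm.covariances_, sep="\n")

# Route 2: the by-hand estimate -- hard k-means clusters, then per-cluster
# mean, covariance, and weight = cluster size / total number of points.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    pts = X[labels == k]
    print(k, len(pts) / len(X), pts.mean(axis=0), np.cov(pts, rowvar=False), sep="\n")
```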
probabilistic neural network approach
In my work, I want to use a Gaussian mixture model for speaker identification. I use Mel-frequency cepstral coefficients (MFCCs) to extract features from the training and testing speech signals, and I use obj = fitgmdist(X, K) to estimate the parameters of the Gaussian mixture model for the training speech signal. I use [p, nlogl] = posterior(obj, testdata) and I choose the minimum nlogl to indicate the maximum similarity between the reference and testing models, as shown in the attached Matlab file.
The problem in my program is that the minimum nlogl changes and it recognizes a different speaker even if I use the same testing speech signal. For example, when I run the program the first time, it recognizes that the first testing speaker has the maximum similarity with the training speech signals (I = 1); if I run the program again on the same testing speech, I get that the fifth testing speaker has the maximum similarity with the training model. I do not know what the problem in the program is, or why it gives a different speaker when I run it three times on the same testing speech signal. Can anyone who specializes in speaker recognition systems and Gaussian mixture models answer my question?
With best regards
This Bayesian network is for designing a pathway for calculating the chances of tissue-specific cancer using microRNA.
I would like to know of the various researchers who are involved in the field of PGMs. I am aware of a few, but I am facing difficulties in finding all the professors who are actively involved in the field, as this area is fairly new compared with other fields.
I want to use conditional random fields for classification.
In more detail, I want to classify every amino acid of a protein sequence as being in a domain boundary or not.
I have read the background theory, but I have not found any practical examples.
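A minimal sketch of the kind of practical example I am after, assuming the sklearn-crfsuite package; the residues, features and labels are made up, and real work would use richer windowed sequence features:
```
import sklearn_crfsuite

def residue_features(seq, i):
    """Simple per-residue features: the amino acid and its immediate neighbours."""
    return {
        "aa": seq[i],
        "prev_aa": seq[i - 1] if i > 0 else "<s>",
        "next_aa": seq[i + 1] if i < len(seq) - 1 else "</s>",
    }

# One training sequence (toy example): features per residue, boundary/not labels.
seq = "MKTAYIAKQR"
X_train = [[residue_features(seq, i) for i in range(len(seq))]]
y_train = [["O", "O", "O", "B", "B", "O", "O", "O", "O", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```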
In a feature tree, a feature either gets selected or it does not; it is more binary than probabilistic. Still, if I want to represent a feature tree as a probabilistic network, what approach should I follow? Can anyone help, please?
I am looking for solutions similar to leaky noisy-OR gates in Bayesian networks that allow for handling the uncertainty caused by a small amount of training data (see the attached publication).
The solution I am looking for does not have to refer to Bayesian networks.
I would appreciate some literature references, or just keywords, so I know what to search for.
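For reference, a small Python sketch of the leaky noisy-OR CPT I am comparing against; the per-parent probabilities and the leak are illustrative values only:
```
from itertools import product

def leaky_noisy_or(parent_probs, leak):
    """Full CPT P(Y=1 | parent configuration) for a leaky noisy-OR gate:
    P(Y=1 | x) = 1 - (1 - leak) * prod over active parents of (1 - p_i)."""
    cpt = {}
    for config in product([0, 1], repeat=len(parent_probs)):
        q = 1.0 - leak
        for active, p in zip(config, parent_probs):
            if active:
                q *= 1.0 - p
        cpt[config] = 1.0 - q
    return cpt

# Three causes with link probabilities 0.8, 0.6, 0.3 and a leak of 0.05
for config, prob in leaky_noisy_or([0.8, 0.6, 0.3], leak=0.05).items():
    print(config, round(prob, 3))
```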
I'm currently working on models presenting high or infinite dimensionality, missing observations and non-linearity. These issues make direct use of numeric approaches infeasible. I'm trying to find regularities and approximations in symbolic form. I need support for basic operations like marginalization or Bayes' rule, and for exploiting the properties of probability distributions, both general (non-negative and summing to 1) and specific (Gaussian, Poisson, ...). I need it to check that I am not missing anything and not making any transcription errors. I know Matlab has a symbolic extension, and I used Mathematica a lot years ago, but I did not find online any example using the specific features for probabilistic modelling that I need. Any suggestion is welcome.
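As an example of the kind of symbolic manipulation I mean, here is a small SymPy sketch that marginalizes a bivariate Gaussian density and verifies normalization; SymPy is just one candidate, and the symbolic engines of Matlab or Mathematica should allow the same check:
```
import sympy as sp

x, y = sp.symbols("x y", real=True)
mu1, mu2 = sp.symbols("mu1 mu2", real=True)
s1, s2 = sp.symbols("sigma1 sigma2", positive=True)

# Independent bivariate Gaussian density p(x, y) = p(x) p(y)
px = sp.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * sp.sqrt(2 * sp.pi))
py = sp.exp(-(y - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * sp.sqrt(2 * sp.pi))
joint = px * py

# Marginalization: integrate y out and check that we recover p(x)
marginal_x = sp.simplify(sp.integrate(joint, (y, -sp.oo, sp.oo)))
print(sp.simplify(marginal_x - px) == 0)                            # True

# Normalization: the marginal density integrates to 1
print(sp.simplify(sp.integrate(marginal_x, (x, -sp.oo, sp.oo))))    # 1
```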
I know of one successful attempt with one-class SVMs, but I am rather interested in neural approaches to the problem, like reconstruction networks or other fundamental neural methods. Are there any other functional examples?
I need some information on Gaussian graphical models (GGMs).
1. From the literature I see that they belong to the family of undirected graphical models. Are they acyclic too? Is there any method for retrieving the direction of the arcs between the nodes in a GGM?
2. A popular method for establishing the arcs between nodes is the estimation of the inverse covariance (IC) matrix. Are there any other methods to find the arcs between nodes?
3. Can we not derive the causality of nodes in a GGM? If we can, please give some literature source.
4. What are the methods of inference in GGMs?
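Regarding point 2, a minimal scikit-learn sketch of the inverse-covariance route, included only to fix what I mean by "establishing the arcs": edges are read off the non-zero off-diagonal entries of the estimated precision matrix (the data are synthetic).
```
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
# Toy data: x2 depends on x0, x1 is independent noise.
x0 = rng.normal(size=500)
x1 = rng.normal(size=500)
x2 = 0.8 * x0 + 0.3 * rng.normal(size=500)
X = np.column_stack([x0, x1, x2])

model = GraphicalLassoCV().fit(X)
precision = model.precision_

# Undirected edges = non-zero off-diagonal entries of the precision matrix.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(precision[i, j]) > 1e-3]
print(edges)   # expected: [(0, 2)]
```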
I am in deep need of a framework for calculating joint and conditional probability tables from a simple array of multivariate data. I need these for computations in probabilistic graphical models. Can anybody help me out?
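In case it helps to describe what I need, here is the kind of computation I mean, done with pandas (the column names and data are made up): joint tables via normalized crosstabs, and conditional tables by normalizing within rows.
```
import pandas as pd

df = pd.DataFrame({
    "Weather": ["sun", "sun", "rain", "rain", "rain", "sun"],
    "Traffic": ["low", "high", "high", "high", "low", "low"],
})

# Joint probability table P(Weather, Traffic)
joint = pd.crosstab(df["Weather"], df["Traffic"], normalize="all")
print(joint)

# Conditional probability table P(Traffic | Weather): normalize within each row
conditional = pd.crosstab(df["Weather"], df["Traffic"], normalize="index")
print(conditional)
```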