\[
I(EQ;EB) = \sum_{x,y\in\{0,1\}} P(x,y)\,\big[\log_2 P(x,y) - \log_2 P(x) - \log_2 P(y)\big]
= \sum_{x,y\in\{0,1\}} P(x,y)\log_2 P(x,y) - \sum_{x\in\{0,1\}} \log_2[P(x)] \sum_{y\in\{0,1\}} P(x,y) - \sum_{y\in\{0,1\}} \log_2[P(y)] \sum_{x\in\{0,1\}} P(x,y),
\]

and, being \(\sum_x P(x,y) = P(y)\) (and, likewise, \(\sum_y P(x,y) = P(x)\)),

\[
I(EQ;EB) = \sum_{x,y\in\{0,1\}} P(x,y)\log_2 P(x,y) - \sum_{x\in\{0,1\}} P(x)\log_2 P(x) - \sum_{y\in\{0,1\}} P(y)\log_2 P(y) = H(EQ) + H(EB) - H(EQ,EB), \qquad (2)
\]

that is, the difference between the sum of the EQ and EB entropies and the joint entropy: it is 0 when EQs and EBs are independent events, while it is > 0 when the events share some information, since H(EQ,EB) < H(EQ) + H(EB). Relation (2) is symmetric, so that I(EQ;EB) = I(EB;EQ). As a consequence, mutual information is not able to detect the direction of the information flux between the events. An asymmetric measure, named time-delayed mutual information (Jin et al., 2006), can be obtained, when one is interested in elucidating when events present themselves in the time domain, by introducing a time-lag parameter Δt between the events EQ and EB in (1). It describes the information shared between EQs and EBs at different times, introducing a consequentiality between the two sets of events in a statistical way:

\[
I(EQ;EB+\Delta t) = \sum_{x,y\in\{0,1\}} P(x,y+\Delta t)\,\log_2\frac{P(x,y+\Delta t)}{P(x)\,P(y)}, \qquad (3)
\]

being P(y+Δt) = P(y). Time-delayed mutual information also works when non-linear links characterize the interactions between two event time series. However, shared history and common external driving effects between two processes seem to escape identification by means of mutual information (Bossomaier et al., 2009). Thus, the idea is to use the dynamics of the events, considering the transition probabilities P(x+1|x), where +1 refers to the next time step, and the entropy rate h(E) = H(E+1,E) - H(E) = H(E+1|E). Then, in analogy with relation (2) and using the properties of the joint probability,

\[
H(EQ{+}1|EQ) + H(EB|EQ) - H(EQ{+}1,EB|EQ)
= -\sum_{x+1,x\in\{0,1\}} P(x{+}1,x)\log_2 P(x{+}1|x) - \sum_{y,x\in\{0,1\}} P(y,x)\log_2 P(y|x) + \sum_{x+1,y,x\in\{0,1\}} P(x{+}1,y,x)\log_2 P(x{+}1,y|x)
\]
\[
= \sum_{x+1,y,x\in\{0,1\}} \big[-P(x{+}1,x,y)\log_2 P(x{+}1|x) - P(y,x,x{+}1)\log_2 P(y|x) + P(x{+}1,y,x)\log_2 P(x{+}1,y|x)\big]
\]
\[
= \sum_{x+1,y,x\in\{0,1\}} P(x{+}1,y,x)\,\big[\log_2 P(x{+}1,y|x) - \log_2 P(x{+}1|x) - \log_2 P(y|x)\big]
= \sum_{x+1,y,x\in\{0,1\}} P(x{+}1,y,x)\,\log_2\frac{P(x{+}1,y|x)}{P(x{+}1|x)\,P(y|x)}, \qquad (4)
\]

which is the information difference between dependent and independent transitions, indicated by P(x+1,y|x) and P(x+1|x)P(y|x), respectively. It is the conditional mutual information I(EQ+1;EB|EQ) and is called transfer entropy, \(TE_{EB \to EQ}\). Finally, being

\[
P(EQ \cap EQ{+}1 \cap EB{+}\Delta t) = P(EQ \cap EQ{+}1 \mid EB{+}\Delta t)\,P(EB{+}\Delta t), \qquad (5)
\]

where P(EB+Δt) = P(EB), the transfer entropy can be obtained starting from conditional probabilities. This perspective looks handy when considering the correlation results between strong
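For concreteness, the sketch below shows one way relations (2)-(4) could be estimated from two binary (0/1) event series, using plug-in (empirical frequency) probabilities. The function names (mutual_information, delayed_mutual_information, transfer_entropy_eb_to_eq), the plug-in estimation strategy, and the toy data are illustrative assumptions, not taken from the original study.

```python
import numpy as np
from collections import Counter


def entropy(counts):
    """Shannon entropy (bits) from a Counter of outcome counts."""
    n = sum(counts.values())
    return -sum((c / n) * np.log2(c / n) for c in counts.values() if c > 0)


def mutual_information(eq, eb):
    """Relation (2): I(EQ;EB) = H(EQ) + H(EB) - H(EQ,EB) for binary 0/1 series."""
    return (entropy(Counter(eq)) + entropy(Counter(eb))
            - entropy(Counter(zip(eq, eb))))


def delayed_mutual_information(eq, eb, lag):
    """Relation (3): mutual information between EQ(t) and EB(t + lag), lag >= 0."""
    if lag == 0:
        return mutual_information(eq, eb)
    return mutual_information(eq[:-lag], eb[lag:])


def transfer_entropy_eb_to_eq(eq, eb):
    """Relation (4): TE_{EB->EQ} = I(EQ_{+1}; EB | EQ), from empirical
    triple frequencies of (x_{+1}, y, x) with x = EQ(t), y = EB(t), x_{+1} = EQ(t+1)."""
    x_next, x, y = eq[1:], eq[:-1], eb[:-1]
    n = len(x)
    c_xyx = Counter(zip(x_next, y, x))   # counts of (x_{+1}, y, x)
    c_xx = Counter(zip(x_next, x))       # counts of (x_{+1}, x)
    c_yx = Counter(zip(y, x))            # counts of (y, x)
    c_x = Counter(x)                     # counts of x
    te = 0.0
    for (xn, yv, xv), c in c_xyx.items():
        p_joint = c / n                               # P(x_{+1}, y, x)
        p_xy_given_x = c / c_x[xv]                    # P(x_{+1}, y | x)
        p_xn_given_x = c_xx[(xn, xv)] / c_x[xv]       # P(x_{+1} | x)
        p_y_given_x = c_yx[(yv, xv)] / c_x[xv]        # P(y | x)
        te += p_joint * np.log2(p_xy_given_x / (p_xn_given_x * p_y_given_x))
    return te


# Toy usage: EB "drives" EQ one step ahead, so TE_{EB->EQ} should exceed TE_{EQ->EB}.
rng = np.random.default_rng(0)
eb = list(rng.integers(0, 2, 10_000))
eq = [0] + [b if rng.random() < 0.8 else 1 - b for b in eb[:-1]]
print(mutual_information(eq, eb))                 # same-time shared information
print(delayed_mutual_information(eb, eq, 1))      # I(EB(t); EQ(t+1)), EB precedes EQ
print(transfer_entropy_eb_to_eq(eq, eb),          # TE_{EB->EQ}
      transfer_entropy_eb_to_eq(eb, eq))          # TE_{EQ->EB}, near zero
```

With plug-in frequency estimates such as these, finite catalogues bias both quantities upward, so in practice significance is usually assessed against surrogates obtained by shuffling one of the two event series.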