We elaborate on an alternative to the usual tree diagram for representing conditional probability. We term the representation the “turtleback diagram” for its resemblance to the pattern on turtle shells. Adopting the set-theoretic view of events and the sample space, the turtleback diagram uses elements from Venn diagrams—set intersection, complement, and partition—for conditioning, with the additional notion that the area of a set indicates probability while the ratio of areas indicates conditional probability. Once the parts of the diagram are drawn and properly labeled, calculating a conditional probability involves only simple arithmetic on the areas of the relevant sets. We discuss turtleback diagrams in relation to other visual representations of conditional probability, and detail several scenarios in which turtleback diagrams prove useful. By the equivalence of recursive space partition and the tree, the turtleback diagram is seen to be as expressive as the tree diagram for abstract concepts. We also provide empirical data on the use of turtleback diagrams with undergraduate students in elementary statistics or probability courses.

Conditional probability [

Let A and B be two events, then the conditional probability of A given B is defined as

ℙ(A|B) = ℙ(A ∩ B) / ℙ(B). (1)
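Definition (1) can be checked on a small finite example. The sketch below is our own illustration (not from the paper): a fair six-sided die, with events chosen so that ℙ(A|B) and ℙ(B|A) differ.

```python
from fractions import Fraction

# One roll of a fair six-sided die as a finite sample space.
omega = frozenset(range(1, 7))

def prob(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

def cond_prob(a, b):
    """P(A | B) = P(A ∩ B) / P(B), exactly as in definition (1)."""
    return prob(a & b) / prob(b)

A = frozenset({2, 4, 6})  # the roll is even
B = frozenset({1, 2})     # the roll is at most 2
print(cond_prob(A, B), cond_prob(B, A))  # 1/2 and 1/3: the two differ
```

Note that ℙ(A|B) = 1/2 while ℙ(B|A) = 1/3 here, which is why conflating the two (a common student error) gives wrong answers.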

Our experience with undergraduate students is that a major difficulty in understanding and working effectively with conditional probability lies in the level of abstraction involved in the concepts of “event” and “conditioning”; see also [

The focus of this article is on productive visual representations for the understanding and application of conditional probability. The significant role of visual representation in mathematics is well-established; see, for example, [

Tomlinson and Quinn [

“Conditional probability is a difficult topic for students to master. Often counter-intuitive, its central laws are composed of abstract terms and complex equations that do not immediately mesh with subjective intuitions of experience. If students are to acquire the mathematical skills necessary for rational judgement, teaching must focus on challenging the personal biases and cognitive heuristics identified by psychologists, and demonstrate in the most accessible way―the power of probabilistic reasoning.” (p. 7)

Documented student difficulties with conditional probability can be summarized as one of three main types [

1) Interpreting conditionality as causality.

2) Identifying and describing the conditioning event.

3) Confusing ℙ ( A | B ) and ℙ ( B | A ) .

Tarr and Jones [

Tree diagrams have been used by many to help understand conditional probability. The idea of a tree diagram is to use nodes for events, the splitting of a node for sub-events, and the edges in the tree for conditioning. For example,

To address issues with the tree diagram, let us re-examine the idea of graphical visualization. There are two important ingredients (or steps) in visualizing an abstract mathematical concept. The first is a concrete graphical representation of the target mathematical objects. This step offloads part of the brain's burden onto concrete graphical objects; without it, one must hold the relevant abstract mathematical objects in mind while preparing for subsequent mathematical operations. The second is that the mathematical concept or operation can be understood or achieved by a simple operation on the graphical objects. This is the step carried out in the brain, and it should preferably be simple (or at least conceptually simple). If a balance can be achieved between these two ingredients in visualizing a mathematical concept, the graphical tool will be successful. This explains why the Venn diagram has been so successful since its introduction, and has now become the standard graphical tool for set theory. Essentially, the Venn diagram converts set objects into graphical objects in such a way that many set relationships and operations can be accomplished by “reading” the diagram: the mathematical operation is done directly by the human visual system, instead of having to invoke both the visual system and the brain. For the tree diagram, on the other hand, each of the two ingredients does some of the job, but there is room for improvement.

The turtleback diagram we propose tries to optimize the two steps involved in the design of a graphical tool for conditional probability. In particular, it views events and the sample spaces as sets, and uses elements from Venn diagrams―set intersection, complement and partition―for conditioning, with the additional notion that the area of a set indicates probability whereas the ratio of areas associated with relevant sets indicates conditional probability. Once parts of the diagram are drawn and properly labelled, the calculation of conditional probability involves just simple arithmetic on the area of relevant sets. This makes it particularly easy to understand and use for problem solving.

There have been several prior attempts to represent conditional probability visually [

This graphical model, for facilitating a visually moderated understanding of conditional probability, described in [

Tomlinson and Quinn visualize compound events A ∩ B , A ∩ B ¯ as nodes of a tree (see

Yamagishi [

“The graphical nature of [roulette-wheel diagrams] take advantage of people’s automatic visual computation in grasping the relationship between the prior and posterior probabilities.” (p. 105).

and provides experimental evidence that use of roulette-wheel diagrams increases understanding of conditional probability beyond that for tree diagrams. In this regard, Sloman et al. [

“The studies reported support the nested-sets hypothesis over the natural frequency hypothesis. .... The nested-sets hypothesis is the general claim that making nested-set relations transparent will increase the coherence of probability judgment.” (p. 307)

“Iconicity” is the lowest of Terrence Deacon’s three levels of symbolic interpretation^{1} [

based. An icon is a form of graphical representation that requires no significant depth of interpretation: an icon brings to mind, without any apparent intermediate thought, something that it resembles. For example, the diagram in

A modified version of Brase’s question is as follows:

“A new test has been developed for a particular form of cancer found only in women. This new test is not completely accurate. Data from other tests indicate a woman has 7 chances out of 100 of having cancer. The test correctly indicated only 5 of these women as having cancer. On the other hand, the test indicated a positive result for 14 of the 93 women without cancer.

Janine is tested for cancer with this new test. Janine has probability―of a positive result from the test, with a probability―of actually having cancer.”

An iconic representation for this problem is shown in

The strength of such iconic representations is that they reduce the calculation of probabilities to simple counting problems and, as Brase [

Our focus is on how to represent an event graphically, how to relate it to the sample space, how to express the notion of conditioning such that it would be easy to understand the concept of conditional probability, to gather pieces of information together, and to solve problems accordingly.

We start by treating the sample space (denoted by Ω) and events as sets and, graphically, as a region and its sub-regions, much as in a Venn diagram. Assume the region representing the sample space Ω has area 1. To simplify the discussion (or, abusing notation), we use a label, say B, to denote the region associated with event B. The label can be a single letter or several letters; the latter indicates an intersection of events (for example, the label AB indicates the intersection of events A and B, and thus of regions A and B). Similarly, the union of two regions (viewed as sets) represents the union of two events. Other operations on events can also be defined in terms of set operations; we omit the details here. To quantify the chance of an event, we associate it with the area of the relevant region. For example, ℙ(B) is indicated by the area of region B.

The centerpiece in “graphing” conditional probability is expressing the notion of conditioning. This can be achieved by re-examining the definition of conditional probability given in (1), which can be interpreted as follows. Let A be the event of interest. Upon conditioning on event B, both the new effective sample space and event A within it can be viewed as their restrictions to B; that is, Ω becomes Ω ∩ B = B and A becomes A ∩ B, respectively. The conditional probability ℙ(A|B) can now be interpreted as the proportion of the part of A inside B (i.e., A ∩ B) relative to region B, that is,

ℙ(A|B) = (area of region A ∩ B) / (area of region B). (2)
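The area-ratio reading of (2) can be illustrated numerically. The sketch below is our own illustration, with hypothetical rectangular regions of a unit square standing in for events; areas are estimated by uniform random points.

```python
import random

random.seed(0)

# Hypothetical events as regions of the unit square (area = probability):
# B is the left half; A is the bottom 40% of the square.
def in_B(x, y):
    return x < 0.5

def in_A(x, y):
    return y < 0.4

n = 100_000
pts = [(random.random(), random.random()) for _ in range(n)]
area_B = sum(in_B(x, y) for x, y in pts) / n
area_AB = sum(in_A(x, y) and in_B(x, y) for x, y in pts) / n

# Ratio of the two areas, as in (2); the exact value here is 0.4.
print(area_AB / area_B)
```

Because A's position is independent of B in this toy layout, the ratio coincides with the unconditional area of A, but the computation itself is exactly the ratio in (2).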

Now we can describe how to sketch a turtleback diagram. We start by drawing a circular disk which represents the sample space Ω . Then we represent events by partitioning the circular disk and the resulting subregions. To facilitate our discussion, we define the partition of a set [

We will use

B = ( A ∩ B ) ∪ ( A ¯ ∩ B ) , (3)

which can be represented by splitting the region for B , i.e., “abcda”, with a straight line “db”. The conditional probability ℙ ( A | B ) can then be calculated as the ratio of the area for region “bcdb” and that for region “abcda”.

The turtleback diagram leads to a partition of the sample space Ω as follows

Ω = B ¯ ∪ B (4)

= B ¯ ∪ ( A ∩ B ) ∪ ( A ¯ ∩ B ) . (5)

Continuing this process, we can define events as complicated as we like, in a simple hierarchical (recursive) fashion, as a nesting sequence of partitions P0 ≻ P1 ≻ P2 ≻ ⋯, where P0 = {Ω}, P1 = {B, B̄}, and Pi+1 is a refinement of Pi for each index i ≥ 0, in the sense that each element of Pi+1 is a subset of some element of Pi.
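The refinement relation between successive partitions can be made concrete in code. The sketch below is our illustration (the atom names are hypothetical labels for A ∩ B, Ā ∩ B, and B̄); it encodes P0 ≻ P1 ≻ P2 as lists of frozensets and checks the refinement property.

```python
# Atoms of the finest partition: "AB" = A∩B, "A'B" = Ā∩B, "B'" = B̄.
P0 = [frozenset({"AB", "A'B", "B'"})]               # {Ω}
P1 = [frozenset({"AB", "A'B"}), frozenset({"B'"})]  # {B, B̄}
P2 = [frozenset({"AB"}), frozenset({"A'B"}), frozenset({"B'"})]

def refines(fine, coarse):
    """True if every cell of the finer partition lies inside some cell of the coarser one."""
    return all(any(cell <= big for big in coarse) for cell in fine)

print(refines(P1, P0), refines(P2, P1))  # True True
```

Refinement runs in one direction only: the coarser partition does not refine the finer one.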

We can now assign labels to each of the sub-regions, e.g., by the name of the relevant events to indicate that a particular region is associated with that event. For example in

One advantage of such a recursive-partition representation of the sample space Ω is that the data are now highly organized and easy to operate on, for example to find the probability of a certain event. The idea of organizing data via recursive space partition and manipulating them by their labels has been explored in CART (classification and regression trees [

This example is taken from online sources (see [

“According to the Arizona Chapter of the American Lung Association, 6.0% of population have lung disease. Of those having lung disease, 92.0% are smokers; of those not having lung disease, only 24.0% are smokers. Answer the following questions.

1) If a person is randomly selected in the population, what is the chance that she is a smoker having lung disease?

2) If a person is randomly selected in the population, what is the chance that she is a smoker?

3) If a person is randomly selected and is discovered to be a smoker, what is the chance that she has lung disease?”

According to the information given in the problem, we can sketch a graph as

1) The answer is simply the area of region abda, which is 6 % × 92 % = 0.0552 .

2) The answer is the area of region edbae, which is 6 % × 92 % + 94 % × 24 % = 0.2808 . This is, in essence, the total probability formula ℙ ( S ) = ℙ ( L ∩ S ) + ℙ ( L ¯ ∩ S ) .

3) Recognizing that this involves conditional probability, the answer is the ratio of the two relevant areas: (area of abda)/(area of edbae) = 0.0552/0.2808 = 0.1966.
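The three answers can be reproduced with a few lines of arithmetic mirroring the area computations above (a sketch in Python; the variable names are ours):

```python
p_L = 0.06              # P(lung disease)
p_S_given_L = 0.92      # P(smoker | lung disease)
p_S_given_notL = 0.24   # P(smoker | no lung disease)

p_LS = p_L * p_S_given_L                  # (1) area of region abda
p_S = p_LS + (1 - p_L) * p_S_given_notL   # (2) area of region edbae (total probability)
p_L_given_S = p_LS / p_S                  # (3) ratio of the two areas

print(round(p_LS, 4), round(p_S, 4), round(p_L_given_S, 4))  # 0.0552 0.2808 0.1966
```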

The Venn diagram is the standard graphical tool for set theory. Both the Venn diagram and the turtleback diagram use regions to represent sets. However, there is a major difference. In a turtleback diagram, as illustrated in

In

Given a graphical representation, it is natural to ask questions about its expressive power―will it be expressive enough to represent a complicated or very abstract concept? We will show that the turtleback diagram is equally expressive as the tree diagram.

The way that the turtleback diagram progressively refines the partition over the sample space is essentially a recursive space partition, where the sets involved in the partition are organized as a chain of enclosing sets. For example, in

( A ∩ B ) ⊆ B ⊆ Ω and ( A ¯ ∩ B ) ⊆ B ⊆ Ω (6)

By equivalence (see, for example, [

1) The root node corresponds to the sample space Ω ;

2) All the child nodes of a node form a decomposition of this node;

3) Down from the root node, the nodes along any path form a chain of enclosing sets.

Property 2) entails the total probability formula, and property 3) corresponds to a refinement of a partition. This allows one to turn the turtleback diagram in

For real-world conditional probability problems, the following formula is often used instead of (1), owing to the availability of information from multiple sources:

ℙ(A|B) = ℙ(A ∩ B) / ∑i ℙ(B ∩ Ai), (7)

where the events Ai form a partition of Ω (pairwise disjoint, with union Ω). This requires calculating probabilities of the form ℙ(B ∩ Ai), in other words, probabilities of intersections of multiple events.
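Formula (7) is easy to mechanize once the partition {Ai} and the conditionals ℙ(B | Ai) are known. The following is a small sketch of our own (the function and variable names are hypothetical):

```python
def posterior(prior, likelihood):
    """Apply (7): given priors P(A_i) over a partition of Ω and likelihoods
    P(B | A_i), return the posteriors P(A_i | B) = P(B ∩ A_i) / Σ_j P(B ∩ A_j)."""
    joint = {a: prior[a] * likelihood[a] for a in prior}  # P(B ∩ A_i)
    total = sum(joint.values())                           # P(B), by total probability
    return {a: p / total for a, p in joint.items()}

# A two-cell partition mirroring the lung-disease example:
post = posterior({"L": 0.06, "no L": 0.94}, {"L": 0.92, "no L": 0.24})
print(round(post["L"], 4))  # 0.1966
```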

In

Thus, in

ℙ(ΩBA) = ℙ(Ω ∩ B ∩ A) = ℙ(B ∩ A) = ℙ(B) · ℙ(A|B), (8)

which is simply the product of edge weights along the path Ω → B → A (the edge weight for Ω → B is ℙ(B)). The same reasoning extends to any node in a tree. Thus we have provided a tree-based interpretation of the turtleback diagram for conditional probability. Such an algebraic system on the tree has the following two properties:

1) The probability of arriving at any node equals the product of edge weights along the path.

2) An edge H → L has weight ℙ(L | ∗, ⋯, H).

This is exactly what a tree diagram represents. The above properties extend readily to a series of events. For example, the probability of the series of events B → C → D can be computed as the probability of arriving at node D along the tree path ⋆ → B → C → D (cf.

ℙ(B ∩ C ∩ D) = ℙ(∗ → B) · ℙ(B → C) · ℙ(C → D) = ℙ(B) · ℙ(C|B) · ℙ(D|B,C). (9)

This approach applies even to non-sequential events, as one can artificially impose an order on the events according to the “arrival” of the relevant information. Thus, we have shown the semantic equivalence of the turtleback diagram and the tree diagram. Their difference lies mainly in the visual representation, which is precisely what matters for a visual tool.
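The path-product rule in (8)-(9) amounts to multiplying edge weights down a tree path. A minimal sketch, ours, with hypothetical edge weights:

```python
from fractions import Fraction

def path_probability(edge_weights):
    """Probability of reaching the end of a tree path: the product of its
    edge weights P(B) · P(C|B) · P(D|B,C) · ..., as in (9)."""
    p = Fraction(1)
    for w in edge_weights:
        p *= w
    return p

# Hypothetical path ⋆ → B → C → D with P(B)=1/2, P(C|B)=1/3, P(D|B,C)=1/4:
p_BCD = path_probability([Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)])
print(p_BCD)  # 1/24
```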

The tree diagram appears less intuitive than the turtleback diagram, as there is no longer an association between the area of a region and its probability (one could use the thickness of an edge to indicate probability, but that is also less attractive). However, the tree diagram seems to scale better to large problems.

We consider four examples in our case studies: the “Lung disease and smoking” example, the “History and war” example, the “Lucky draw” example, and the “Urn model” example [

With the tree diagram, the answer to (1) is the probability of reaching node S along the path ⋆ → L → S, which is the product of edge weights along this path, calculated as 6% × 92% = 0.0552. The solution to (2) is the sum of the products of edge weights along the two paths ⋆ → L → S and ⋆ → L̄ → S, that is, 6% × 92% + 94% × 24% = 0.2808, and (3) is the ratio of the product of edge weights along the path ⋆ → L → S over the sum over the two paths, which is 0.0552/0.2808 = 0.1966 (

This example was artificially created to have a problem structure similar to that of the “Lung disease and smoking” example. It is described as follows.

“According to a market research about the preference of movies, 10% of the population like movies related to history. Of those who like movies related to history, 90% also like movies related to wars; of those who do not like movies related to history, only 30% like movies related to wars. Answer the following questions.

(a) If a person is randomly selected in the population, what is the chance that she likes both movies related to wars and movies related to history?

(b) If a person is randomly selected in the population, what is the chance that she likes movies related to wars?

(c) If a person is randomly selected and is discovered to like movies related to wars, what is the chance that she likes movies related to history?”

We can construct a turtleback diagram as the left panel of

Similarly, the right panel of

The lucky draw example is taken from the popular lucky draw game. It is especially useful because many sampling-without-replacement problems can be converted to it and solved easily. Here we take a simplified version in which there are 5 tickets in total, exactly one of which is a prize ticket. The description is as follows.

“There are 5 tickets in a box, one of which is the prize ticket. Five people each randomly draw one ticket from the box, without returning drawn tickets to the box. Is this a fair game (i.e., does each person draw the prize ticket with the same chance)?”

The probability of getting the prize ticket at the first draw is the area of the region labelled “P”, which is 0.2. Following the figure, the probability of getting the prize ticket at the second draw is the area of the region labelled “NP”, which is 0.8 × 25% = 0.2. Similarly, the probability of getting the prize ticket at the third draw is 0.8 × 75% × 1/3 = 0.2, and so on.
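The fairness claim can be verified for every draw at once. The following sketch (ours) computes, with exact fractions, the probability that the k-th draw gets the single prize ticket:

```python
from fractions import Fraction

def prize_at_draw(k, n=5):
    """P(draw k gets the single prize among n tickets, drawn without replacement):
    the first k-1 draws all miss, then draw k hits."""
    p = Fraction(1)
    for j in range(k - 1):
        p *= Fraction(n - 1 - j, n - j)   # draw j+1 misses
    return p * Fraction(1, n - (k - 1))   # draw k hits

print([str(prize_at_draw(k)) for k in range(1, 6)])  # every entry is 1/5
```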

This can be viewed as an extension of the lucky draw problem, in the sense that there is more than one prize ticket. Note that this example mainly serves to demonstrate that both the tree and the turtleback diagram can be used to solve problems of this complexity (one can solve it quickly by distinguishing the two green balls and applying the result of the lucky draw game^{2}). Assume there are 2 green balls and 3 red balls. The problem is described as follows.

“There are 2 green balls and 3 red balls in an urn. One randomly draws one ball at a time, five times, without returning the drawn balls to the urn. Does each draw have the same chance of getting a green ball?”

⋆ → G → R → G , ⋆ → R → R → G , ⋆ → R → G → G ,

which is (2/5)(3/4)(1/3) + (3/5)(1/2)(2/3) + (3/5)(1/2)(1/3) = 2/5. One can similarly calculate that the probability of getting a green ball at each of the other draws also equals 2/5.
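The answer 2/5 for every draw can be confirmed by brute-force enumeration over all orderings of the (distinguished) balls, a quick sanity check of the diagram-based calculation (sketch ours):

```python
from fractions import Fraction
from itertools import permutations

balls = ["G1", "G2", "R1", "R2", "R3"]   # 2 green, 3 red, distinguished
orders = list(permutations(balls))       # 5! = 120 equally likely orderings

# For each position k, the fraction of orderings with a green ball at draw k+1.
probs = [
    Fraction(sum(order[k][0] == "G" for order in orders), len(orders))
    for k in range(5)
]
print([str(p) for p in probs])  # ['2/5', '2/5', '2/5', '2/5', '2/5']
```

By symmetry, each position holds a green ball in 2 × 4! = 48 of the 120 orderings, i.e., with probability 2/5.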

“RGG”, “GRG”, “RRGG”, “RRGRG”,

which is

3/5 × 1/2 × 1/3 + 2/5 × 3/4 × 1/3 + 3/5 × 1/2 × 2/3 × 1/2 + 3/5 × 1/2 × 2/3 × 1/2 = 0.4.

The calculation seems a little tedious but is conceptually very simple, as long as one can follow the way the regions are partitioned.
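The four terms above can be double-checked with exact fractions (the region labels follow the text; the sketch is ours):

```python
from fractions import Fraction as F

# Area of each labelled region, read off the turtleback diagram.
terms = {
    "RGG":   F(3, 5) * F(1, 2) * F(1, 3),
    "GRG":   F(2, 5) * F(3, 4) * F(1, 3),
    "RRGG":  F(3, 5) * F(1, 2) * F(2, 3) * F(1, 2),
    "RRGRG": F(3, 5) * F(1, 2) * F(2, 3) * F(1, 2),  # final factor 1 omitted
}
print(sum(terms.values()))  # 2/5
```

Each region has area exactly 1/10, so the four together give 4/10 = 0.4.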

We carried out case studies on over 200 students. These include students in the elementary statistics class STAT235 (non-calculus based) at the University of Missouri-Kansas City (UMKC) during 2012-2013, and students in the elementary statistics class MTH231 and the elementary probability class MTH331 at the University of Massachusetts Dartmouth (UMassD) during 2015-2017. The three courses had fairly different student populations. In STAT235, about 30% of students were from engineering, 30% from business, and the rest from such diverse majors as biology, chemistry, psychology, political science, education, etc. In MTH231, about 80% were from mathematics or data science, and the rest from majors such as computer science, electrical engineering, criminal justice, etc. In MTH331, about 75% were from computer science or electrical engineering, 20% from mathematics or data science, and the rest from other engineering majors or economics, physics, etc.

We collected two types of data from the case studies: one on students' preference between graph-based and non-graph-based approaches, and the other on students' preference between the turtleback and the tree diagram. Except for the non-graph-based approach, by “preference” we mean that the students actually used the technique for problem solving, and in nearly all such cases they applied it correctly in solving the assigned problem; we therefore use this as a measurement of learning outcome (with the understanding that further experiments may be needed to validate this). The results are reported in

In terms of a preference for which graphical tool, the results show an interesting pattern. For the “Lung disease and smoking” and the “War and history” example, more students prefer the turtleback diagram to the tree diagram, around 53% - 54% vs 33% - 34%. The “Lucky draw” and the “Urn model” examples exhibit an opposite pattern, more students prefer the tree diagram to the turtleback diagram, around 46% - 48% vs 31% - 34%^{3}. This is probably due to the fact that, in the first two examples, the sample spaces and events involve populations in the usual sense, while the last two examples involve sequential decisions, for which a tree structure that represents the decision dichotomy may be more natural (although in such cases, the concept of conditional probability is not as natural as that in the turtleback diagram). Further experiments are needed to confirm this. The advantage of the turtleback diagram over the tree diagram

Course | # students | Class size | Institute |
---|---|---|---|
STAT235 | 128 | 40 - 60 | UMKC |
MTH231 | 25 | 10 - 20 | UMassD |
MTH331 | 72 | 35 - 45 | UMassD |

Course | Lung disease and smoking | War and history movie | Lucky draw | Urn model |
---|---|---|---|---|
STAT235 | 66 | 62 | 62 | 66 |
MTH231 | 14 | 11 | 14 | 11 |
MTH331 | 37 | 35 | 35 | 37 |
Total | 117 | 108 | 111 | 114 |

Question | Neither helpful | Either one helpful | Prefer Turtleback | Prefer Tree |
---|---|---|---|---|
Lung disease and smoking | 13.7% | 86.3% | 53.0% | 33.3% |
War and history movies | 11.1% | 88.9% | 54.6% | 34.3% |
Lucky draw | 17.1% | 82.9% | 34.2% | 48.6% |
Urn model | 21.9% | 78.1% | 31.6% | 46.5% |

appears to decrease as the problem becomes harder, but this is not a serious concern for beginning students: those who most need help from a graphical representation are precisely those who cannot yet solve simple problems. Moreover, we do not expect a single graphical tool to help solve all problems; rather, different people may use different tools for a particular problem.

Many instances of conditional probability occur in sampling without replacement. Tarr and Jones [

Research Question 1: Are turtleback diagrams, as compared to tree diagrams, helpful to students, at any or all of the Tarr-Jones framework levels, in understanding conditional probability? If so, how can we measure and assess the comparative utility of turtleback diagrams relative to tree diagrams?

Research Question 2: Related to Research Question 1, specifically, how helpful are turtleback diagrams in helping students understand conditional probability in the context of sampling without replacement?

Conditional probability is increasingly being introduced into middle school in the United States. The Conference Board of the Mathematical Sciences [

Of all the mathematical topics now appearing in middle grades curricula, teachers are least prepared to teach statistics and probability. Many prospective teachers have not encountered the fundamental ideas of modern statistics in their own K-12 mathematics courses... Even those who have had a statistics course probably have not seen material appropriate for inclusion in middle grades curricula. (p. 114)

Research Question 3: Are turtleback diagrams helpful to middle school teachers of probability and statistics in (a) enhancing their own understanding of conditional probability and (b) assisting them to better teach conditional probability? If so, how and to what extent?

Motivated by difficulties encountered by many undergraduate students new to statistics, we re-examined the definition and representation of conditional probability and presented a Venn-diagram-like approach: the turtleback diagram. We discussed our graphical tool in the context of other graphical models for conditional probability, and carried out case studies on over 200 students in elementary statistics or probability classes. Our case study results are encouraging: graph-based approaches could potentially lead to significant improvements in both students' understanding of conditional probability and their problem solving. While the existing tree diagram is preferred to the turtleback diagram on problems involving sequential decisions, the turtleback diagram is considered more helpful in settings where the underlying population resembles a usual human population; it is exactly in such situations that weaker students are most likely to need help. Though the turtleback diagram looks very different from the tree diagram, we are able to unify the two and show their semantic equivalence.

Our discussion suggests a simple framework for visualizing abstract concepts: a suitable graphical representation of the abstract concept, followed by simple post-processing in the visual-brain system. A good visualization needs to balance both. We used this framework to interpret the difficulty encountered by the tree diagram and to guide our development of the turtleback diagram. Further studies are expected to validate this framework or adapt it to general visualization tasks. Given the increasingly important role played by data visualization in data science and exploratory data analysis [

Our case studies suggest that it is worthwhile to introduce such graphical tools to students whose success would seem to depend on them. We hope this will benefit our statistics colleagues who teach elementary statistics, and students who struggle with the concept of conditional probability and its application to problem solving. The potential savings in time can be huge. As a conservative estimate, assume that each year about 1.5 million bachelor's degrees are awarded in the US (about 1.67 million were awarded in 2009), that about 200,000 of these graduates have taken an elementary statistics class, that about 10% of them would need help and succeed with our proposed approach, and that the average class size is 40. If each instructor saves 2 hours in each elementary statistics class and each student who benefits from our approach saves 1 hour, then the estimated total time saved is at least 30,000 hours per year in the U.S. alone.

The authors are grateful to Professor Yong Zeng at UMKC for kindly pointing to the “Lung disease and smoking” example, and for encouragement and support on some of the case studies.

The authors declare no conflicts of interest regarding the publication of this paper.

Yan, D.H. and Davis, G.E. (2018) The Turtleback Diagram for Conditional Probability. Open Journal of Statistics, 8, 684-705. https://doi.org/10.4236/ojs.2018.84045